Popular Content

Showing content with the highest reputation since 03/09/20 in all areas

  1. 11 points
    Hi everyone: I am Squid's wife. I just wanted everyone to know he will be 50 on Sunday, March 22nd. If you all can wish him a happy birthday, that would be great. Due to COVID-19, no party. Thanks, Tracey
  2. 10 points
    Just caught onto this today (Thx @SpaceInvaderOne!), saw we're "only" #2, which just won't do. Just "remembered" I have a Threadripper 2950X new in box; I was going to sell the old dual Xeon E5 V2s and upgrade, but now I'm going to bring this out and join the fray with the i9-9900 Hackintosh [AMD 580] and Ryzen 3700X. The Threadripper will have to go "benchtop bare" for now, but that's OK. Should probably just use the office for a sauna now 🥵. Think the UPS is sweating a tad... I am a regional medical director for a company that does home medical visits on the sickest of the (US Medicare) population, i.e. top-tier risk for COVID, average patient age 80+. We have offices in all the top affected cities in the US so far. We're working nonstop to try to keep our patients safe at home. We've had to retreat temporarily to mostly telephonic visits due to a shortage of PPE (protective gear) until our supply improves, so we don't spread it to them; very frustrating. Now I can feel better about being stuck at home, still helping on the compute side as well until we can get back safely into their homes. I wanted to thank everyone here for being so eager to take part and take action, and with such impressive results. It means a lot in the medical world to see folks being resourceful and doing their part. Please stay home, stay safe, and round up some more CPUs for this!
  3. 10 points
  4. 9 points
    Use extra Unraid CPU or GPU computing power to help take the fight to COVID-19 with BOINC or Folding@Home! https://unraid.net/blog/help-take-the-fight-to-covid-19-with-boinc-or-folding-home Stay safe everyone. -Spencer
  5. 8 points
    This thread will serve as the support thread for the GPU statistics plugin (gpustat). Currently, a single Nvidia card is supported. No testing outside this scenario has been done, and other configurations are not guaranteed to work in any fashion. UPDATE 2020-03-15: Released. Implemented class-based code to support additional vendors and (hopefully) fixed the issue with upgrading users' settings not populating. Prerequisite: 6.7.1+ Unraid-Nvidia plugin with an Nvidia build installed. The plugin is now live on CA, but if you want to install it manually, see below. To review the source before installing (**You should always do this**): https://github.com/b3rs3rk/gpustat-unraid Manual plugin installation URL: https://raw.githubusercontent.com/b3rs3rk/gpustat-unraid/master/gpustat.plg Enjoy! ====================================================================== Information to include when asking for support: 1) the result of 'nvidia-smi -q -x -i 0' from the Unraid console (via SSH or the web terminal is fine); 2) the result of 'cd /usr/local/emhttp/plugins/gpustat/ && php ./gpustatus.php'; 3) a screenshot of the dashboard plugin (if the issue is only seen during transcoding, then a snippet during a transcode is best).
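If it helps, the three support items can be gathered in one go with a small helper script; the output path and the fallback messages are my own, while the two commands are the ones listed above:

```shell
# Collect gpustat support info into one file (path is arbitrary).
out=/tmp/gpustat-support.txt
{
  echo "== nvidia-smi -q -x -i 0 =="
  nvidia-smi -q -x -i 0 2>&1 || echo "(nvidia-smi not available)"
  echo "== gpustatus.php =="
  php /usr/local/emhttp/plugins/gpustat/gpustatus.php 2>&1 || echo "(gpustat plugin not found)"
} > "$out"
echo "wrote $out"
```

Attach the resulting file along with the dashboard screenshot when posting.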
  6. 8 points
    Hi, maybe this helps with how to use the new --net=container:Container_Name function to use another container's network, a nice feature to route traffic through a VPN container when the client container is not capable of using a proxy.

    Sample use case: I run a VPN container which provides a Privoxy HTTP proxy and a SOCKS proxy, but I have a container like xteve which doesn't have the ability to route its traffic through an HTTP or SOCKS proxy. So if I wanted to put it behind the VPN, I previously had to either put the whole machine behind a VPN or build a container which bundles the VPN AND xteve. With this feature enabled, we can now route any container through the VPN container pretty easily. I'll describe two scenarios: 1. all containers on custom:br0 with their own IPs (a nice feature which works properly with host access since 6.8.2, as a note); 2. a VPN container like binhex privoxy, ovpn privoxy, etc. on the host in bridge mode (port mappings needed).

    Scenario 1: ovpn_privoxy is my VPN container, connected to my VPN provider and providing, as mentioned, an HTTP and a SOCKS proxy; xteve can't use those features. Here all my containers are on br0 with their own IPs. To bridge xteve through the VPN container, simply remove the network from xteve and add the following line (in this use case) to Extra Parameters: --net=container:ovpn_privoxy. Now xteve will use the network stack from the VPN container: the xteve container no longer has its own IP and shows container:ovpn_privoxy as its network. To reach the xteve web UI, enter the IP of ovpn_privoxy plus the client app's port. All of xteve's external traffic now uses the VPN connection from ovpn_privoxy. That's it; thanks to limetech. You can add further containers the same way, but beware: since there is only one network stack left, it's not possible to run apps which use the same ports. For example, a second instance of xteve through the VPN container, both listening on 34400, would NOT work, even though they live in their own containers; the network stack belongs to the ovpn container and its ports are unique. So either the 2nd, 3rd, ... app can use a different port (xteve, for instance, can be switched to any port) or it's just not possible. With a second working app (say emby) behind ovpn_privoxy, you reach the clients the same way: the VPN container's IP plus each app's port. The HTTP proxy (port 8080) and SOCKS proxy (port 1080) of course also remain available; they are not affected. I hope this helps with how to use the --net=... extra parameter.

    Scenario 2 (VPN container running on the Unraid host in bridge mode): the only difference is that you have to add the client apps' port mappings to the VPN container; in this case I would add 34400:34400 and 6555:6555 to the VPN container. Now your VPN and client apps are all accessed via the Unraid server's IP.

    In both use cases there is another nice feature limetech added: as soon as the VPN container gets an update, the "client" container(s) need an update too, which in the end is just a restart to pick up the correct network stack. You should see an update notification on all containers relating to the VPN container as soon as it receives an update or you change something on it; if so, please push update or restart the container(s). This shouldn't happen too often (depending on the update frequency of your VPN container). If I can do something better, let me know and I'll correct it.
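A condensed sketch of both scenarios as plain `docker run` commands (Unraid's Extra Parameters field is where `--net=container:...` ends up; the `vendor/...` image names are placeholders I made up, while the container names and ports come from the post above):

```shell
# Scenario 1: VPN container on br0; the client shares its network stack.
# Image names (vendor/...) are placeholders, not real images.
vpn_cmd="docker run -d --name ovpn_privoxy --network br0 vendor/ovpn-privoxy"
client_cmd="docker run -d --name xteve --net=container:ovpn_privoxy vendor/xteve"

# Scenario 2: VPN container in bridge mode. The VPN container owns the shared
# stack, so the CLIENT apps' ports (34400, 6555) must be published on it,
# alongside its own proxy ports (8080 HTTP, 1080 SOCKS).
vpn_bridge_cmd="docker run -d --name ovpn_privoxy -p 8080:8080 -p 1080:1080 -p 34400:34400 -p 6555:6555 vendor/ovpn-privoxy"

echo "$client_cmd"
```

The key point: the client container is started with no network of its own, and every published port has to live on whichever container owns the stack.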
  7. 8 points
    Update: We're up over 1,200 Unraid users across the BOINC and Folding@home Unraid teams. Wow, thank you all. 🙏 http://boinc.bakerlab.org/rosetta/team_display.php?teamid=18943 https://folding.extremeoverclocking.com/team_summary.php?s=&t=227802
  8. 7 points
    Everyone is "right" in this topic, let me explain: First, Andrew, aka @Squid was given the "ok" by me to publicize the BOINC and Folding@home plugins for purpose of making people aware of their existence, please get off his back for this. CA is an awesome piece of software and @Squid has curated both it and the appfeed with the utmost care and respect. Honestly, I've been personally swamped with many things and didn't think much of it, other than thinking it was a great idea because a lot of people feel helpless and this is a tangible - albeit small - thing they can do in this time we are living through now. We have users all over the world including areas severely affected and we have received messages of appreciation for bringing attention to this. That said, it has been our policy since the beginning: We do not send unsolicited e-mail to anyone, nor do we authorize anyone else to do so on our behalf. Exception: we may send email notifications of critical security updates. In hindsight I see this was not right to send the notification email and we'll see to it that it doesn't happen again. Finally I don't see any real purpose in re-opening this topic. I think all's been said that needs to be said.
  9. 7 points
    https://github.com/electrified/asus-wmi-sensors <- this would be nice to have in 6.9 for ASUS users. It directly reads the WMI interface that ASUS has moved to and displays all sensors properly (supposedly).
  10. 6 points
    Hey 👋 This started off as a bit of a hobby project that slowly became something I thought could be useful to others. It was an exercise for me in writing a new app from scratch and in the different choices I would make, compared to having to constantly iterate on an existing (large) code base. After sharing this with some of the community in the unofficial Discord channel, I was encouraged to get it into a state where it makes sense for others to use. https://play.google.com/store/apps/details?id=uk.liquidsoftware.companion I've already received some great feedback, as well as a number of issues and requests for new features that I hope to add soon. I hope others will find this as useful as I do in managing your UNRAID servers. Enjoy
  11. 6 points
    Multiple cache pools are being internally tested now. Multi-array pools are not in the cards for this release.
  12. 6 points
    There is unlikely to be a cure. COVID-19 is caused by a virus, so the focus would be on a vaccine and treatment (to alleviate symptoms). And then we have the side burden of dealing with anti-vaxxers and idiots, but no amount of Rosetta@home can help with that.
  13. 6 points
    Hey guys, here's an updated BOINC docker image: https://hub.docker.com/r/linuxserver/boinc It's brand spanking new, based on Ubuntu Bionic. It contains both the BOINC client and manager, so no need to install anything on another machine. It's not suitable for an in-place upgrade from my old rdp-boinc image, but setting it up from scratch takes only a minute or two. This one works with Intel and Nvidia (with the Nvidia plugin) GPUs, as long as the project supports it. There will be a template and a support thread soon, but until then you can just create it with the vars from the Docker Hub description.
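Until the template lands, a `docker run` along these lines should work; the PUID/PGID/TZ variables, `/config` volume, and port here follow linuxserver.io's usual conventions and are my assumptions, so confirm the exact vars and ports against the Docker Hub description:

```shell
# Sketch of running linuxserver/boinc from the CLI (env vars, port, and
# appdata path are illustrative; verify against the Docker Hub description).
boinc_cmd="docker run -d --name=boinc \
  -e PUID=99 -e PGID=100 -e TZ=Etc/UTC \
  -p 8080:8080 \
  -v /mnt/user/appdata/boinc:/config \
  linuxserver/boinc"
echo "$boinc_cmd"
```

Once running, the bundled manager is reachable through the published port, so no separate client machine is needed.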
  14. 6 points
    Joined the team today. A big virtual hug to all contributors from shutdown Italy. Hope you're all doing well, and thanks for the Unraid notification, @Squid
  15. 6 points
    One suggestion for the Boinc Install instructions. In order for the File->Select Computer option to show up, you have to switch to Advanced View. Simple view does not show that option.
  16. 5 points
  17. 5 points
  18. 5 points
  19. 4 points
    Anyone looking at doing a Jitsi docker for UnRaid? https://jitsi.org/
  20. 4 points
    Welcome to 6.9 beta release development! This initial -beta1 release is exactly the same code set as 6.8.3, except that the Linux kernel has been updated to 5.5.8 (the latest stable as of this date). We have done only basic testing on this release; normally we would not release 'beta' code, but some users require the hardware support offered by the latest kernel, along with the security updates added in 6.8. Important: beta code is not fully tested and not feature-complete. We recommend running on test servers only! Unfortunately, none of our out-of-tree drivers will build with this kernel. Some were reverted to their in-tree versions; some were omitted. We anticipate that by the time 6.9 enters the 'rc' phase, we'll be on the 5.6 kernel, and hopefully some of these out-of-tree drivers will be restored. We will be phasing in new features, improvements, and changes to address certain bugs over the coming weeks; these will all be -beta releases. Don't be surprised if you see a jump in the beta number, as some releases will be private.

    Version 6.9.0-beta1 2020-03-06
    Linux kernel: version 5.5.8
    igb: in-tree
    ixgbe: in-tree
    r8125: in-tree
    r750: (removed)
    rr3740a: (removed)
    tn40xx: (removed)
  21. 4 points
    https://github.com/jitsi/docker-jitsi-meet Anyone know how to install this on unraid? I'm spoiled with the unraid apps to install containers.
  22. 4 points
    Thanks to @bonienl this is coming in 6.9 release!
  23. 4 points
    BOINC team coming in at #2 in the world!!! https://boinc.bakerlab.org/rosetta/top_teams.php
  24. 4 points
    Made a quick video that may help people having difficulty setting up
  25. 4 points
    Either don't set it to 100%, or make it run on all cores except core 0. Unraid has a tendency to become unresponsive when other tasks push all cores to 100%.
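At the Docker level, the standard `--cpuset-cpus` flag can express "every core except core 0"; a minimal sketch, assuming the machine has more than one logical core:

```shell
# Build a cpuset string covering every logical core except core 0.
total=$(nproc)              # number of logical cores
cpuset="1-$((total - 1))"   # e.g. "1-15" on a 16-thread machine
echo "$cpuset"
# Usage (standard Docker flag): docker run --cpuset-cpus="$cpuset" ...
```

This keeps core 0 free for Unraid itself even if the workload inside the container is set to use all CPUs it can see.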
  26. 4 points
    I appreciate the work that Squid does and all the developers that work on Unraid/the community applications. I purchased Unraid multiple times and will continue to do so, however I'd like to take a civil approach to asking that any developer realize this is not okay. I am all for donating and giving time to organizations but that's my decision. I would never ask and have never asked in anything I've distributed for free or paid for something like this. I think this is irresponsible and a blatant disrespect of the community. I feel a much more appropriate tone would have been a post just in the forums or an email even. Instead we are invaded by messages that should not be. I didn't bring politics or ethics into the equation. I expect the developers of Unraid/CA respect that my server is a business and the forums are the community.
  27. 4 points
    That base is too old. I'll look into creating a new boinc image when I get a chance, on a newer base and perhaps with gpu support
  28. 4 points
    One of the best software purchases I ever made.
  29. 4 points
    For the next little while, the moderators of CA will try and keep those two applications on the main landing page of CA (New Apps) to gain some further exposure
  30. 4 points
    Once I heard about F@H having coronavirus work units, I went ahead and installed it on my work laptop (running full tilt all the time), my main Windows driver, and my HTPC. While it is immaterial what team you use (if you even want one), if you do choose, unRaid's F@H team number is 227802. If you happen to be one of those people who feel this isn't a big deal, that choice is yours, but bear in mind that donating some spare cycles costs you nothing and can only help.
  31. 4 points
    Maybe sooner if you can bribe him with small fish, crabs, and shrimp.
  32. 3 points
    In the next few days, CA's application feed will be grabbing the download statistics for the various BOINC and Folding@home applications and generating the stats. As a precursor, though, here are the stats for the linuxserver Folding@home and BOINC applications. CA will not generate graphs for these two applications because they are too new to the system and not enough data is available to create a credible graph. LinuxServer BOINC: added to CA on March 18th with 179 downloads at the time; today (April 6) there are 60,950 downloads. LinuxServer Folding@Home: added to CA on March 20th with 14 downloads at the time; today there are 134,884 downloads. Graphs are available within CA for the original Folding@home app from CaptInsano from a few years ago (long deprecated), which barely worked (if at all): it has gone from 99,320 on March 5th to 102,450 on April 4th. While this is a very modest increase, the graphs tell a far different story about what is going on: a very steep climb of people trying it out. Once graphs are available within CA for the official BOINC container and mobiusnine's Folding@home container (within a couple of days), I will post them. I expect each of them will show a very steep rise in downloads per month, and the graphs currently available, which are from early March, are very suggestive of just where they are going to wind up.
  33. 3 points
    This too is coming in the 6.9 release.
  34. 3 points
    Some considerations on using the BOINC docker for Rosetta@home: performance and memory concerns. BOINC defaults to using 100% of the CPUs, and by default Rosetta will process one task per CPU core/thread. So if you have an 8-core machine (16 threads with HT), it will attempt to process 16 tasks at once. Even if you set Docker pinning to specific cores, the Docker image will see all available cores and begin one task per core/thread. If you want to limit the number of tasks being processed, change the setting for the % of CPUs to use. Using the 8-core machine example above, setting it to 50% would process 8 tasks at a time, regardless of how you set up CPU pinning. RAM and out-of-memory errors: some of the Rosetta jobs can consume a lot of RAM; I have noticed individual tasks consuming anywhere between 500 MB and 1.5 GB. You can find the memory a task is using by selecting the task and clicking the Properties button. If a task runs out of memory, it may get killed, wasting the work done or delaying it. It is helpful to balance the number of tasks you run against the amount of RAM you have available: on the example machine above, if I am processing 8 tasks, I might expect RAM usage anywhere from 4 GB to 10 GB. The Docker FAQ has instructions on limiting the amount of memory a container uses, but be aware that processing too many tasks and running out of memory will just kill them and delay processing. My real-world example: CPU: 3900X 12-core (24 threads with Hyperthreading); RAM: 32 GB; usage limit set to 50%, so processing only 12 tasks at a time; RAM limited to 14 GB (I could go a little higher but haven't needed to, as most tasks stay under 1 GB); CPU pinning to almost all available cores. Since putting those restrictions on, I have had very stable processing and no out-of-memory errors.
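The figures above are easy to sanity-check with shell arithmetic; the 500 MB to 1.5 GB per-task range is the post's own observation:

```shell
# Estimate concurrent tasks and RAM for the 3900X example in the post.
threads=24    # 12 cores, 24 threads with Hyperthreading
cpu_pct=50    # BOINC "use at most N% of the CPUs" setting
tasks=$(( threads * cpu_pct / 100 ))
ram_low_mb=$(( tasks * 500 ))     # lower bound: 500 MB per task
ram_high_mb=$(( tasks * 1500 ))   # upper bound: 1.5 GB per task
echo "$tasks tasks, ${ram_low_mb}-${ram_high_mb} MB"
# -> 12 tasks, 6000-18000 MB
```

The 14 GB container limit in the post sits comfortably inside that range because, as noted, most tasks stay under 1 GB in practice.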
  35. 3 points
    Please don't take offense at this, but you probably will anyway. If the only or overriding reason you are running this is to be directly compensated in some way, don't run it.
  36. 3 points
    Which is why I am asking - to show a demand
  37. 3 points
  38. 3 points
    Hey guys, just created the template for the new linuxserver BOINC image; it should show up in CA soon. Here's the support thread: As I said before, this one is new. It contains both the client and the manager, so no need to install anything on a different machine. It supports GPUs (although Rosetta@home is currently not handing out GPU jobs, some of the other projects do). Also, this one is multi-arch, so feel free to put any Raspberry Pis or other IoT devices to use as well 😉 @squid when this is listed in CA, feel free to deprecate my old RDP BOINC one
  39. 3 points
    Thanks for posting this; I had no idea it existed. Added 32 cores and 3 GPUs. I just got home (I work in the ED) and I can say without a doubt that folks are getting a little too worked up here in the U.S., and the worst is yet to come! Please, please, please, if you are asymptomatic, DO NOT go to the EDs! We need the beds for acute emergencies. Use telemedicine services, even if it is only a phone call. The trained triage ear of a healthcare provider can tell a lot about a person from just listening to them. Wash your hands!!!! Try not to touch your face!!! Cough/sneeze into your elbow (think the Dracula pose). Oh, and wash your hands!!!!!! 😉 I'm beat... g'night everyone. Thanks again for the idea here @SpencerJ!
  40. 3 points
    Just updated it. That fixed it. Thank you very much
  41. 3 points
    All discussion of CA & notifications has been moved here
  42. 3 points
    I agree with this. There should be a standard agreed on by the developers and the community at large. Discussion is key here. If CA did this now (my first experience with this in Unraid) where does it end? Do we start telling people about religions? Presidential candidates?
  43. 3 points
    Can this recent discussion be spawned off into a feedback topic, since it goes beyond just CA to what in general should be announced and what shouldn't, etc.?
  44. 3 points
    THIS is NOT how you go about discussing a subject of this importance. Just because 800 people joined the program does NOT mean they personally feel it is okay for CA to do this. Now or ever! Your post is a problem, and I'm all for toxicity; however, I paid for Unraid and would like not to be bothered by an agenda on my own hardware. Even if I do donate my hardware to the cause using those projects, I won't do it with the "Unraid Team" if this is how CA conducts itself in the future. You can dismiss alael, myself, and others... But doing so is not going to help the community. We're not a fringe group on the edge of political dissidence; we believe in "our hardware, our right".
  45. 3 points
    Wow, strong complaints about something rather benign compared to most OSes. I guess those of you complaining have never used Windows 10. Notifications from everything under the sun kept popping up, and I finally disabled them all. The problem with that is that there are some notifications I actually do want to receive, so the only option is to turn them back on and then disable them one by one per app/third party. I am guaranteed to get at least one from who-knows-what new source before I can decide if I want to disable it. Frankly, I am 100% behind what Squid did with the notification system and agree with the way he has implemented it. It is much less intrusive than the Windows 10 notification system and not open to general third-party abuse. Different strokes for different folks, I guess, but I do not see it as a security risk or inappropriate at all given its intended purpose. I guess I have no problem with "Community Apps" notifying me of something that I consider to be a "community" issue, but I respect the rights of others to hold differing opinions on the matter.
  46. 3 points
    Tried to give my GPU as well on my windows VM but for some reason it doesn't appear to be using my GPU at all. Hope the watercooling holds up hehe. Edit: Running fold@home as well to get the GPU going
  47. 3 points
    I’ve got one guy on Reddit enlisted to send me some sample data from the intel-gpu-tools package. If it is workable, I will try to add it. There’s a build for Slackware 14.2, so it’s not like the binaries aren’t there. I just have Xeons in my box, so I can’t test it at all.
  48. 3 points
    I would love Intel iGPU support independent of the Unraid Nvidia build. My servers have Intel iGPUs, so I do not have to rely on Unraid Nvidia (not that there is anything wrong with it 😁). Perhaps by using intel_gpu_top or other elements of intel-gpu-tools, although it is not nearly as useful as nvidia-smi.
  49. 3 points
    Maybe a simple sample: I run my ovpn_privoxy container, and I want another container (e.g. xteve) to run through the VPN connection, because the app doesn't provide a proxy feature. So I set up the xteve container as follows... The result is that the xteve container now uses the network from my ovpn_privoxy container. If the VPN container is in bridge mode, you of course have to map the client container's port(s) on it; in this sample, 34400:34400 on the ovpn_privoxy container. Works and tested here. As a note: very nice, and thanks again to limetech for adding this.
  50. 3 points
    Just want to throw this out there: one of the main reasons some of us choose to mess with compose is to get around some of the limitations of the unRAID template system, in particular when it comes to complex multi-container applications, which often use several frontend and backend networks.