Leaderboard


Popular Content

Showing content with the highest reputation since 03/25/20 in all areas

  1. 4 points
    https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Released
  2. 3 points
    Thanks to @bonienl this is coming in the 6.9 release!
  3. 3 points
    Some considerations on using the BOINC docker for Rosetta@Home.
    Performance and memory concerns: BOINC defaults to using 100% of the CPUs, and by default Rosetta will process 1 task per CPU core/thread. So if you have an 8-core machine (16 with HT) it will attempt to process 16 tasks at once. Even if you set docker pinning to specific cores, the docker image will see all available cores and begin 1 task per core/thread. If you want to limit the number of tasks being processed, change the setting for the % of CPUs to use. Using the 8-core machine example above, setting it to 50% would process 8 tasks at a time, regardless of how you set up CPU pinning.
    RAM and out-of-memory errors: some of the Rosetta jobs can consume a lot of RAM. I have noticed individual tasks consuming anywhere between 500MB and 1.5GB. You can find the memory a task is using by selecting the task and clicking the properties button. If a task runs out of memory it may get killed, wasting the work done or delaying it, so it helps to balance the number of tasks you have running against the amount of RAM you have available. In the example machine above, if I am processing 8 tasks I might expect the RAM usage to be anywhere from 4GB to 10GB. The docker FAQ has instructions on limiting the amount of memory the docker container uses, but be aware that processing too many tasks and running out of memory will just kill them and delay processing.
    My real-world example:
    CPU: 3900X 12-core (24 w/ Hyperthreading)
    RAM: 32GB
    Usage limit set to 50%, so processing only 12 tasks at a time
    RAM limited to 14G; I could go a little higher, but haven't needed to. Most tasks stay under 1GB
    CPU pinning to almost all available cores (screenshot of actual CPU usage in the original post)
    Since putting those restrictions on, I have had very stable processing and no out-of-memory errors.
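    Not from the original post, just a sketch of how those limits can be applied to the BOINC container; the flags are standard docker run options, while the image name and exact values are assumptions to adapt to your own template (in the Unraid GUI the --memory flag would go in the Extra Parameters field):
    # Hypothetical example: cap the container at 14GB of RAM and pin it to 12 threads
    docker run -d --name=boinc \
      --memory=14G \
      --cpuset-cpus=0-5,12-17 \
      linuxserver/boinc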
  4. 3 points
    Please don't take offense at this, but you probably will anyway. If the only or overriding reason you are running this is to be directly compensated in some way, don't run it.
  5. 2 points
    Hi guys, unfortunately I see the same issue with SMB. Here are my results comparing the Performance and On Demand CPU profiles (screenshots in the original post). P-states are disabled in my config, so my CPU runs at max frequency even at idle. Here are the write speeds once again (screenshot in the original post). Currently this prevents me from switching to unRAID in production. Shouldn't it be a high-priority issue? My specs: 10GbE, i9-9900, NVMe cache, intel_pstate=disable
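    Not part of the report above, just a quick sketch of how the two profiles can be compared from the Unraid console using the standard cpufreq sysfs paths:
    # Show the governor currently in use on each CPU
    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    # Temporarily force the performance governor (reverts on reboot)
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
    done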
  6. 2 points
    Greetings, I'm still trying to figure out how to author my own CA apps, but for now here's an easy setup that I'm pretty sure a number of you will appreciate. Searx is a self-hostable meta search engine with a focus on privacy and complete control. Here's their description from the site: "Searx is a free internet metasearch engine which aggregates results from more than 70 search services. Users are neither tracked nor profiled. Additionally, searx can be used over Tor for online anonymity."
    Features (also pilfered from their site): self hosted; no user tracking; no user profiling; about 70 supported search engines; easy integration with any search engine; cookies are not used by default; secure, encrypted connections (HTTPS/SSL); hosted by organizations, such as La Quadrature du Net, which promote digital rights.
    Links:
    Homepage: https://asciimoo.github.io/searx
    List of publicly hosted engines: https://searx.space/
    Wiki: https://github.com/asciimoo/searx/wiki
    Source code: https://github.com/asciimoo/searx
    Twitter account: https://twitter.com/Searx_engine
    OK, now down to the setup.
    You'll need to "Enable additional search results from dockerhub" (fig.01). Head on over to the Community Applications tab and search for "searx", then click the text "Click Here To Get More Results from DockerHub". There are a number of results; we'll want to choose the one from the actual author of the build, shown below.
    In the setup we'll want to verify the host adapter is set to "Bridge". We'll need a port to access it by, so click "Add another Path, Port, Variable, Label or Device" and select "Port" from the drop down. I've named it "Web UI", set the Container Port to 8080 (the port searx listens on by default), and set the Host Port to 8843 (any unused port will do here, just remember it for later). Click "ADD".
    Before we finish, let's make a few tweaks: we'll add a WebUI entry to the drop down in the dashboard and assign it an icon.
    For the Icon URL paste in the following link: https://asciimoo.github.io/searx/_static/searx_logo_small.png
    For the WebUI paste the following: http://[IP]:[PORT:8843]/ (remember I mentioned that port number?)
    Click "Apply". Your dashboard icon should look like the screenshot in the original post. You now have your own self-hosted private search engine! In the preferences you'll be able to configure which search engines you want to use by default. It even searches those "Linux ISO" sites; I'll leave the rest up to your imagination. (I'm personally a fan of the legacy theme, as shown above.)
    Enjoy your privacy! Hope this helps everyone. ~Iron
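    For reference, a rough docker run equivalent of the template described above, assuming the searx/searx image from Docker Hub and the same host port; this is a sketch, not part of the original guide:
    # Map host port 8843 to searx's default port 8080
    docker run -d --name=searx \
      -p 8843:8080 \
      searx/searx
    # Then browse to http://<server-ip>:8843/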
  7. 2 points
    If you use PIA and you are seeing the above in your log, then the issue is that the PIA API is down. It looks like they are having technical difficulties right now, see here: https://www.reddit.com/r/PrivateInternetAccess/comments/fs7ja0/cant_get_forwarded_port/ For now your only option is to set STRICT_PORT_FORWARD to 'no'; this will allow you to connect but you will NOT have a working incoming port, so speeds will be slow at best. Just to be clear, there is nothing I can do about this guys, it's a VPN provider issue.
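    For anyone unsure where that setting lives: STRICT_PORT_FORWARD is an environment variable on the container, so it is the field of the same name in the Unraid template, equivalent to adding the following to a plain docker run (shown only as a sketch):
    # Disable strict port forwarding on the VPN container
    -e STRICT_PORT_FORWARD=no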
  8. 2 points
  9. 2 points
    This can be caused by the RFS file system used with 4.7 if the drives are nearly full. Sorry, but there is no fix for this problem except to convert to another file system. (Basically, the fix is to copy the data from one of the RFS drives to a new drive with one of the new formats, format that old RFS drive to a new format, and repeat with the next RFS drive.)
  10. 2 points
    Here are the hardware requirements: I would recommend at least 4GB of RAM. Some folks have had problems updating from one version to the next with only 2GB of RAM. (Updates now happen much quicker because of security patches required for those using VMs and Dockers.)
  11. 2 points
    Finally got irregular numbers of core assignments to work. Details can be found in the git issues. I will update the manual and config.plist later.
    3-cores/6-threads:
    <vcpu placement='static' current='6'>8</vcpu>
    <vcpus>
      <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
      <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
      <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
      <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
      <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
      <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
      <vcpu id='6' enabled='no' hotpluggable='yes'/>
      <vcpu id='7' enabled='no' hotpluggable='yes'/>
    </vcpus>
    <cputune>
      <vcpupin vcpu='0' cpuset='6'/>
      <vcpupin vcpu='1' cpuset='14'/>
      <vcpupin vcpu='2' cpuset='7'/>
      <vcpupin vcpu='3' cpuset='15'/>
      <vcpupin vcpu='4' cpuset='5'/>
      <vcpupin vcpu='5' cpuset='13'/>
    </cputune>
    <cpu mode='host-passthrough' check='none'>
      <topology sockets='1' cores='4' threads='2'/>
    </cpu>
    5-cores/5-threads:
    <vcpu placement='static' current='5'>8</vcpu>
    <vcpus>
      <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
      <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
      <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
      <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
      <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
      <vcpu id='5' enabled='no' hotpluggable='yes'/>
      <vcpu id='6' enabled='no' hotpluggable='yes'/>
      <vcpu id='7' enabled='no' hotpluggable='yes'/>
    </vcpus>
    <cputune>
      <vcpupin vcpu='0' cpuset='6'/>
      <vcpupin vcpu='1' cpuset='14'/>
      <vcpupin vcpu='2' cpuset='7'/>
      <vcpupin vcpu='3' cpuset='15'/>
      <vcpupin vcpu='4' cpuset='8'/>
    </cputune>
    <cpu mode='host-passthrough' check='none'>
      <topology sockets='1' cores='8' threads='1'/>
    </cpu>
  12. 2 points
    Kinda interesting: since this morning my large server with dual 2GHz processors has been sitting idle while connected to the Rosetta@home client. I restarted the docker to make sure nothing broke, and after it connected again it is still not being utilized. My faster machines are still hard at work, leading me to believe that they now have such a surplus of available machines that they may be utilizing only the faster equipment in the pool. This is a good thing, and more than enough in this effort is a blessing. If I don't see activity by tomorrow I may assign the slower server to another service so it too will be used in a productive way. Either way, I'm tickled that our group has shown so much compassion and human spirit in this crisis.
  13. 2 points
  14. 2 points
    Umm, is this how skynet gets started?!? We were so preoccupied with whether or not we could, we didn’t stop to think if we should. 😜
  15. 1 point
    You're most welcome. It doesn't; I downloaded Postgres:11, it's on CA already.
  16. 1 point
    Have you had a look at Booksonic? Might do what you want.
  17. 1 point
    Folding @ Home seems to have a steady supply, particularly for GPUs 👍 ************************************************************************************** This is what the unRAID Docker image looks like under 'APPS' (screenshot in the original post)
  18. 1 point
    That's not recommended. Unraid's GUI should be protected from general access; use a VPN if you need a WAN connection. The other services you expose should be evaluated on a case-by-case basis. Unraid's GUI is not yet ready to be exposed. That's the end goal, but we're not there yet.
  19. 1 point
  20. 1 point
    Pretty much anything works. Just stick with the better-known names and recent hardware. (The exception is the recommended LSI SAS/SATA cards.) If you are looking at running a VM, be sure to read the VM sections in both the update guide and the manual for the current version 6. Hardware is a bit more restrictive depending on how close you want to get to the 'bare metal' experience.
  21. 1 point
    See if this helps: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  22. 1 point
    Just updated to the latest branch a few minutes ago and confirmed this fixed the recent "pgrep: cannot allocate 4611686018427387903 bytes" issue for me. Thank you for correcting this so quickly and for all the hard work you do for us @binhex
  23. 1 point
    I'm trying to use your DOH-server on my iOS devices with DNSCloak, but sadly it doesn't work. I've used this site https://dnscrypt.info/stamps/ to generate a stamp (https://blog.privacytools.io/adding-custom-dns-over-https-resolvers-to-dnscloak/). I've already tried different combinations, but it won't connect. Could you explain how to use it with DNSCloak? By the way, is there a way to test whether the DOH-server is working correctly, with a curl command or something like that?
    EDIT: It's working now. I just found out that letsencrypt didn't start because the template caused an error. After changing it based on my other configs, letsencrypt started again and the DOH-server worked instantly using DNSCloak. As far as testing goes, you can use: curl --doh-url SERVER www.example.com
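    A concrete shape of that test, with the resolver URL as a placeholder for your own DoH endpoint (curl has supported --doh-url since 7.62):
    # Resolve www.example.com through your own DoH server; the URL below is a placeholder
    curl -v --doh-url https://doh.yourdomain.example/dns-query https://www.example.com/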
  24. 1 point
    I would be very interested as well!
  25. 1 point
    Hey 👋 This started off as a bit of a hobby project that slowly became something I thought could be useful to others. It was an exercise for me in writing a new app from scratch and the different choices I would make, compared to having to constantly iterate on an existing (large) code base. After sharing this with some of the community in the unofficial discord channel, I was encouraged to get it into a state where it makes sense for others to use. https://play.google.com/store/apps/details?id=uk.liquidsoftware.companion I've already received some great feedback as well as a number of issues and requests for new features that I hope to add soon. I hope others will find this as useful as I do in managing their UNRAID servers. Enjoy
  26. 1 point
    Thank you. A space snuck itself in before the user key. It will be way better now.
  27. 1 point
    @ieronymous The default Linux template is kinda OK. Some distros won't work if they don't come with the virtio or SCSI drivers. Some have issues with the machine type version; for example, pfSense only works with Q35-2.6. You will always get the best disk performance with raw and SCSI. In most cases this should work.
  28. 1 point
    You need setuptools
    Thank you. And done. Sensing the pattern, I searched for json after I saw the following:
    Traceback (most recent call last):
      File "/usr/bin/docker-compose", line 6, in <module>
        from pkg_resources import load_entry_point
      File "/usr/lib64/python3.8/site-packages/pkg_resources/__init__.py", line 3252, in <module>
        def _initialize_master_working_set():
      File "/usr/lib64/python3.8/site-packages/pkg_resources/__init__.py", line 3235, in _call_aside
        f(*args, **kwargs)
      File "/usr/lib64/python3.8/site-packages/pkg_resources/__init__.py", line 3264, in _initialize_master_working_set
        working_set = WorkingSet._build_master()
      File "/usr/lib64/python3.8/site-packages/pkg_resources/__init__.py", line 583, in _build_master
        ws.require(__requires__)
      File "/usr/lib64/python3.8/site-packages/pkg_resources/__init__.py", line 900, in require
        needed = self.resolve(parse_requirements(requirements))
      File "/usr/lib64/python3.8/site-packages/pkg_resources/__init__.py", line 786, in resolve
        raise DistributionNotFound(req, requirers)
    pkg_resources.DistributionNotFound: The 'jsonschema<4,>=2.5.1' distribution was not found and is required by docker-compose
    No dice. Any advice on how I can determine these dependencies and root them out? Thank you again.
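    Not from the thread, just a sketch of the usual way to chase these down with pip: pip3 check lists every unmet requirement at once, and pip3 install pulls in the one named in the traceback (version pin copied from the error message):
    # List all unsatisfied dependencies of installed packages, including docker-compose's
    pip3 check
    # Install the specific missing requirement from the traceback above
    pip3 install 'jsonschema>=2.5.1,<4'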
  29. 1 point
    Rock solid. No instability to speak of. I'm running an older F12e BIOS due to issues I reported on page 3 about the F12i BIOS. There is the latest F12 (no "e" nor "i") BIOS, which I can't be bothered to update to since, as mentioned, everything is rock solid. For a primarily gaming build, I would recommend you also consider Intel's single-die CPU offerings. While having lower maximum performance, Intel's single-die design means you get more consistent gaming performance (e.g. lower latency, less fps variability aka stuttering, etc.). My VM is workstation-first, gaming-second (and I can't tell the diff with fps variability, but I know someone who can), so TR is perfect for me.
  30. 1 point
    Overview: Support for Docker image Shinobi Pro
    Documentation: https://shinobi.video/docs/
    Video Guide: Showing how to set up and configure Shinobi Pro.
    If you want to run Shinobi Pro through a reverse proxy, below is a config file that you can edit. Save it as shinobi.subdomain.conf
    # make sure that your dns has a cname set for Shinobi
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name shinobi.*;
        include /config/nginx/ssl.conf;
        client_max_body_size 0;
        location / {
            include /config/nginx/proxy.conf;
            proxy_pass http://IP-OF-CONTAINER:8080;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
        }
    }
    If you appreciate my work, then please consider buying me a beer
  31. 1 point
    Onboard SATA ports are usually good enough, but there are several Ryzen users who have issues with the onboard SATA controller where it stops responding, mostly if IOMMU is enabled. You can try them, and if there are issues, use the HBA.
  32. 1 point
    lol, "demand unit" would probably be the new currency here in the forums
  33. 1 point
    No, licenses are tied to the USB flash drive.
  34. 1 point
    Dear Lime Tech, I would like to see some direct feedback in the web GUI on performance data for individual drives and the average parity check speed, to better judge whether I should consider replacing individual drives or whether my array is performing normally. With your current userbase and the wide range of hardware configurations, each array already has this data ready to harvest. Such a feature would extend the lifespan of each system, because we would better know what counts as a normal average speed for a parity check or for individual drives, and what counts as normal wear and tear based on SMART data or per-drive performance after a parity check completes; these are questions I frequently see in the forums. If such recommendations were presented in a user-friendly and intuitive way, it could prevent data loss and help end users understand when drives should be replaced and make informed decisions about proactive maintenance. A new consent option in the user interface could allow periodic uploads of this kind of data, and the results could be presented with notification boxes or as more detailed info in the SMART data section or elsewhere. Hopefully I have presented my suggestion well enough that it can be implemented; I would be very happy if it comes to fruition.
  35. 1 point
    With multiple cache pools, you can get faster SSD storage that is capable of having redundancy. There are lots of ways to use that. Then just use the much larger, cheaper, and slower HDDs for archiving. Just upsize HDDs instead of adding more: more disks require more ports and other hardware, a higher license tier if you don't already have the max, and each disk is just another point of failure. I've never understood why some people have 20 or more 2TB disks in their array.
  36. 1 point
    Skitals, would you be able to {compile the latest kernel} = {magic} with the patches again for us soon? Thanks
  37. 1 point
    More accurately, it would be that the fstrim command itself thinks that the drive(s) are SSDs. Since you can't do anything about the command itself, what you'd have to do is forget about the plugin and run the appropriate command via the User Scripts plugin on an appropriate schedule. E.g.:
    fstrim /mnt/cache -v
    would only trim your cache drive.
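    A minimal sketch of what that User Scripts entry could look like (the mount point is the cache path from the example above; add a line per SSD mount you want trimmed):
    #!/bin/bash
    # Trim the cache pool; -v reports how many bytes were discarded
    fstrim -v /mnt/cache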
  38. 1 point
  39. 1 point
    I just resolved my issue. From the docker's console I ran the following commands, which allowed me to log in:
    /usr/local/openvpn_as/scripts/sacli --key "vpn.server.daemon.enable" --value "false" ConfigPut
    /usr/local/openvpn_as/scripts/sacli --key "vpn.daemon.0.listen.protocol" --value "tcp" ConfigPut
    /usr/local/openvpn_as/scripts/sacli --key "vpn.server.port_share.enable" --value "true" ConfigPut
    /usr/local/openvpn_as/scripts/sacli start
  40. 1 point
    Hi, the following comes from me, a new Unraid user, one who understands the value the product offers yet has found it quite technically challenging to get going the way I need. Although I'm not a Linux guru, I have a pretty typical tech background, so I think I represent a pretty large addressable market.
    First off, this obviously isn't news, but to me the product seems (or was) focused on the headless NAS market. This is great as far as it goes, but I think it's probably being used more as a workstation OS virtualization product these days. My attempts to get OS virtualization going leave me feeling like my attempts to use ESXi in a similar way: although I think I'm close to getting a solution running that meets my needs, it just feels oddly inside-out, like trying to pound a square peg into a round hole. The GPU passthrough feature, while great, is actually pretty difficult for the unwashed masses (like me) to implement, and I think it really limits the product's market appeal from its true potential. Therefore, I'm going to suggest three improvements of increasing breadth, starting with a minor tweak and culminating in a suggestion for basically a new product to sell alongside Unraid server.
    A bit of background. I'm primarily a Windows developer; I've been programming professionally since before Windows. Yeah, I'm kinda old. For over a decade now I've been (mostly) happily using VMware Workstation to virtualize Windows guests on a Windows host. This has delivered a lot of convenience, allowing me to isolate my dev & test environments, etc., and, crucially, protect my IP by not allowing secured guest VMs to access the internet while still being able to access LAN resources (primarily a LAN file server). However, as programming evolves, I've increasingly needed access to a full GPU. Unfortunately Workstation has become something of a backwater product for VMware as they chased the cloud, and they're unlikely to provide real DX12 shader program access from within a guest anytime soon; the product has been stuck at DX9-level acceleration plus some fake software emulation since around 2014. So I haven't been able to do work in Unreal Engine, nor anything else requiring more than basic graphics, for quite some time. This has left me in an ugly multi-boot / multi-box / KVM-switch environment I've wanted to move beyond for a long time. Thus my interest in Unraid.
    Idea 1: My immediate need is to set up Unraid so I can work in a 'software assured' environment where my (and my clients') IP can't just slip out onto the net due to some phishing scam email, a shareware app that self-update installs a back door, etc. So I've gotten Unraid to boot, auto-start a passthrough GPU & SSD VM, and gotten that working pretty well. However, I need to partition the VM from the WAN but still access the LAN. I originally intended to install pfSense, since that seems to be the typical route people are going, so I installed a 2nd NIC. For whatever reason stubbing that 2nd NIC broke Unraid networking somehow (never figured that out), but anyway I'd prefer something lighter. It seems like the iptables routing capability built into Unraid should be sufficient for my simple needs, so I'm trying to use that, with mixed effect. It's been a long road but I'm pretty close to getting it working (with the help of @bonienl, thanks so much!). But sitting here thinking about it, really all I need instead of a 2nd NIC and dealing with br1 isolation is a virtual bridge network that's the converse of virbr0, i.e. instead of being a WAN-only bridge I need a LAN-only bridge. So my suggestion is to simply add a lanbr0 to the existing product and allow VMs to bind their virtio network adapter to it. God that would have made my life easier!
    Idea 2: So people want to virtualize Windows, but this is a steep learning curve for us Windows weenies. We are a very large addressable market, and there is a serious need for a product that makes Windows more secure. I think the following product could sell well if properly marketed. Redesign Unraid (probably a new product) so that it can: 1. run completely from a USB flash device, probably locally encrypted, creating no HDD partitions; 2. boot and load Unraid + KVM; 3. load whatever the default Windows OS on the HDD is into a bare-metal KVM, sort of like how @SpaceInvaderOne does in his dual "boot Windows bare-iron and within a VM" youtube video; 4. pass through all hardware devices EXCEPT the NIC(s), with network access instead supplied by the virtio bridge. This would allow all sorts of opportunities to better manage the network access, insert network monitors, firewalls, etc., and ideally a complete network security layer under Windows. Crucially, something needs to be done to wound Windows so that bypassing this security and simply booting Windows natively again doesn't bypass this new security layer. No, I haven't fully thought this part out yet.
    Idea 3: This running a NAS on my workstation, taking over the screen, keyboard & mouse, is as great as it is problematic. Getting dropped into the Unraid GUI, losing the display once the GPU is passed through, it's just unforgiving without multiple sets of keyboards, mice, & screens, or at least a KVM switch. I've kludged my Dell monitor, which supports super basic KVM-switch ability, but even now it's pretty esoteric by mortal human standards. Yeah, I know you Linux gurus are laughing at me... So, I think Lime Tech should come out with an entirely new product, one aimed at workstation use. Call it Unraid Workstation. This product might ditch (or deprecate) some of the NAS features but add a real Linux desktop. It would adopt the Looking Glass project and help get it out of beta. It would then enable GPU virtualization while sharing the keyboard/mouse similar to how I do it in VMware Workstation, but better (with full GPU support). Ideally this would work in full-screen mode (as I can do in Workstation), where apps like games can run with little limitation, yet when you drag the cursor to the top of the screen a window slides down and you can VM-switch as easily as you can task-switch today. Then add in a bunch of Linux goodness, like a firewall better than pfSense. Personally I don't understand why nobody's done a docker firewall; is everyone waiting for WireGuard? But the whole thing needs to be turn-key for us non-bearded Windows losers.
    OK, that's a lot of word salad to digest, hope you enjoyed it. Feel free to laugh / cry / etc. or even ask me questions if folks want to talk about it. Peace, Dav3 </rant>
  41. 1 point
    Made a simple colorful banner from an old background to share. It seems to scale well if you have a large screen.
  42. 1 point
    Finally got around to this. Had to redo the SlackBuild since they changed the source. Updated rar2fs to 1.28.0 and unrar to 5.8.5.
  43. 1 point
    Polite little bump - I'm sure you just forgot about this.
    # Generated file integrity check schedule:
    10 0 * * * /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null
  44. 1 point
    Just got some time to try a full system backup of my daily driver VM, then created a new VM, booted off the urbackup recovery media and restored, and voila, it worked a treat! The restored VM booted up and everything works as expected, yay! Note: if you do try a restore, the machine you are restoring to (whether VM or physical) must have a disk/vdisk equal to or larger than the machine the backup was taken from; this includes used space AND, importantly, also free disk space, otherwise you will receive an 'error GPT restore' when you attempt to restore.
  45. 1 point
    Here is a temporary solution until limetech has implemented this feature:
  46. 1 point
    Can you elaborate why this is wrong? I have had it working prior to the update. The IP address of the server is 198.162.0.20, so wouldn't that make 192.168.0.1/24 what I should be typing in that field?
  47. 1 point
    Ok so I just added a path in the Krusader container settings (attached), restarted the container, and that seemed to solve the issue.
  48. 1 point
    Thanks @jonathanm, I will consider going ahead with it then. It would include Privoxy, and I guess that would fill the gap for people who want only a secure proxy and not a torrent client.
  49. 1 point
    We are a bunch of grumpy old bastards that don't like changes
  50. 1 point
    Lol.... Yeah, I missed that line in the OP... Too many beers after work.