Leaderboard

Popular Content

Showing content with the highest reputation on 03/26/20 in all areas

  1. Some considerations on using the BOINC docker for Rosetta@home: performance and memory concerns.

Performance: BOINC defaults to using 100% of the CPUs, and by default Rosetta will process one task per CPU core/thread. So an 8-core machine (16 threads with HT) will attempt to process 16 tasks at once. Even if you pin the Docker container to specific cores, the image still sees all available cores and starts one task per thread. If you want to limit the number of tasks being processed, change the "% of CPUs" setting instead: on the 8-core example above, 50% would process 8 tasks at a time, regardless of how you set up CPU pinning.

RAM and out-of-memory errors: some of the Rosetta jobs can consume a lot of RAM; I have noticed individual tasks consuming anywhere between 500 MB and 1.5 GB. You can find the memory a task is using by selecting the task and clicking the Properties button. If a task runs out of memory it may get killed, wasting the work done or delaying it, so it helps to balance the number of running tasks against the RAM you have available. On the example machine above, 8 running tasks might use anywhere from 4 GB to 10 GB. The docker FAQ has instructions on limiting the amount of memory the container uses, but be aware that processing too many tasks and running out of memory will just kill them and delay processing.

My real-world example: CPU: 3900X, 12 cores (24 threads w/ SMT); RAM: 32 GB. CPU usage limit set to 50%, so only 12 tasks are processed at a time. RAM limited to 14 GB; I could go a little higher, but haven't needed to, as most tasks stay under 1 GB. CPU pinned to almost all available cores. Since putting those restrictions in place, I have had very stable processing and no out-of-memory errors.
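The task-count arithmetic above can be sketched as a quick shell calculation (the thread count, percentage, and 0.5-1.5 GB per-task range are the figures from this post; adjust for your own machine):

```shell
#!/bin/sh
# BOINC starts one Rosetta task per allowed CPU thread.
threads=24    # e.g. a 3900X: 12 cores / 24 threads
cpu_pct=50    # BOINC "use at most this % of the CPUs" setting

tasks=$(( threads * cpu_pct / 100 ))
echo "concurrent tasks: $tasks"

# Budget roughly 0.5-1.5 GB of RAM per task, per the range observed above:
echo "RAM budget: $(( tasks / 2 )) to $(( tasks * 3 / 2 )) GB"
```

With the 50% setting this lands at 12 tasks and a 6-18 GB RAM budget, which is why a 14 GB container limit works out in practice.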
    2 points
  2. Nextcloud based on the official Docker Hub image. Nextcloud with full Office integration. Based on: https://hub.docker.com/_/nextcloud/ Tag: apache (latest). Please make sure you get the volume mounting correct. Note that you can mount any share to, for example, /mnt/Share and add it in Nextcloud with the "External storage" app.

-- DONATE: Please buy me a pizza > https://www.buymeacoffee.com/maschhoff

-- GUIDES:

---- FOLDER RIGHTS
To get the right folder/file permissions, add this to Extra Parameters and Post Arguments:
<ExtraParams>--user 99:100 --sysctl net.ipv4.ip_unprivileged_port_start=0</ExtraParams>
<PostArgs>&& docker exec -u 0 NAME_OF_THIS_CONTAINER /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars'</PostArgs>

---- REVERSE PROXY
For security reasons, and to get HTTPS, you should use a reverse proxy. If you want to run it behind a reverse proxy, make sure your proxy configuration is right. As a reverse proxy I recommend LetsEncrypt with nginx; you will find a lot of example configurations. Then add 'overwriteprotocol' => 'https' to your Nextcloud config. Note that you will no longer be able to access it without your HTTPS reverse proxy.

---- SPLIT DNS
If you are running Nextcloud in your home network, it is a good choice to use split DNS so connections go directly to your Nextcloud. You will also need it to make ONLYOFFICE accessible from both inside and outside your network. To get this done you need a reverse proxy configuration and the right DNS entries. Example for a dnsmasq or Pi-hole DNS server config:
address=/cloud.mydomain.com/192.168.100.100
address=/cloud.mydomain.com/1a01:308:3c1:f360::35e
192.168.100.100 should be your reverse proxy; you need a proxy configuration that matches "cloud.mydomain.com" and passes it to your real Nextcloud IP.

---- SECURITY CHECK
You can schedule a task that automatically checks the security level and gives you a push notification if there are any issues. It is based on https://scan.nextcloud.com/ and https://github.com/maschhoff/nextcloud_securipy

Any questions? Don't hesitate to ask.
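Spelled out, the reverse-proxy change above is an addition to Nextcloud's config/config.php. A minimal sketch, assuming the reverse proxy from the split-DNS example; only 'overwriteprotocol' comes from this post, while 'overwritehost' and 'trusted_proxies' are standard Nextcloud options added here for completeness:

```
'overwriteprotocol' => 'https',
'overwritehost'     => 'cloud.mydomain.com',
'trusted_proxies'   => ['192.168.100.100'],
```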
    1 point
  3. Overview: Support for Docker image Shinobi Pro. Documentation: https://shinobi.video/docs/ Video guide: showing how to set up and configure Shinobi Pro. If you want to run Shinobi Pro through a reverse proxy, then below is a config file that you can edit. Save it as shinobi.subdomain.conf:

# make sure that your dns has a cname set for shinobi
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name shinobi.*;
    include /config/nginx/ssl.conf;
    client_max_body_size 0;
    location / {
        include /config/nginx/proxy.conf;
        proxy_pass http://IP-OF-CONTAINER:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}

If you appreciate my work, then please consider buying me a beer.
    1 point
  4. Support for Firefox docker container. Application Name: Firefox Application Site: https://www.mozilla.org/en-US/firefox/ Docker Hub: https://hub.docker.com/r/jlesage/firefox/ Github: https://github.com/jlesage/docker-firefox This container is based on Alpine Linux, meaning that its size is very small. It also has a very nice, mobile-friendly web UI to access the Firefox graphical interface, and it is actively supported! Make sure to look at the complete documentation, available on GitHub! Post any questions or issues relating to this docker in this thread.
    1 point
  5. I've tried to set it up myself, but haven't had much luck, so I'm just wondering about other people's experiences and whether anyone can give me any tips. The main issue I have is that after it installs, on the first reboot it gets stuck and only a black screen with lines is shown; sometimes it gets stuck on the GRUB menu. When I changed VNC drivers there was a black screen with a big mouse cursor, like others have reported. I tried changing BIOS settings, though that didn't do much.
    1 point
  6. I suspect your older cards won't support NVENC or NVDEC either, so even if you could get them running on Unraid, you wouldn't get transcoding via Emby/Plex/etc.
    1 point
  7. Yes. See the docker FAQ https://forums.unraid.net/topic/57181-docker-faq/?do=findComment&comment=566088
    1 point
  8. Here is a port list: https://support.plex.tv/articles/201543147-what-network-ports-do-i-need-to-allow-through-my-firewall/ It's much easier to let Plex manage them in host mode than to add them all manually.
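If you do want to add them manually instead, the list from that article can be expressed as host firewall rules; a sketch using ufw (32400/TCP is the only strictly required port, the rest cover discovery and companion features):

```
ufw allow 32400/tcp              # main Plex media server port (required)
ufw allow 8324/tcp               # Plex Companion (Roku)
ufw allow 32469/tcp              # Plex DLNA server
ufw allow 1900/udp               # DLNA discovery
ufw allow 5353/udp               # Bonjour/Avahi discovery
ufw allow 32410,32412:32414/udp  # GDM network discovery
```

Unraid itself doesn't ship ufw; the rules are shown here as a readable sketch and map one-to-one onto whatever firewall or router you actually use.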
    1 point
  9. Your Turbo Boost is working normally. 3.8 GHz is only achieved if just 1 core is maxed out; if 6 or more cores are maxed out, 3.3 GHz is the boost speed. http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon%20E5-2690.html
    1 point
  10. Download the Tips and Tweaks plugin and enable "Intel Turbo/AMD Performance Boost". Also, if you're doing all-core boosts, make sure the CPU is rated for boosting on all cores.
    1 point
  11. It's not a filesystem corruption problem, which is why I asked for diagnostics after array start: Mar 26 09:53:38 unRAID root: mount: /mnt/disk1: unknown filesystem type 'xfs'. This happened one time before; it's as if xfs support is not installed. I don't remember exactly what the problem was, but it was probably a corrupt Unraid install. Take the opportunity to upgrade to v6.8.3 and reboot.
    1 point
  12. Thanks - I see I did miss a bunch of ports. My bad. Busy fiddling. Thx for the quick response
    1 point
  13. Yes, because you need to forward all the ports that are in the Docker template (since I don't own the game I can't test exactly which ports, but I put a description of what each port is for; I think you need at least UDP 26900, 26901 and 27015, and TCP 26900. As I said, though, I would forward all the ports from the template with the corresponding protocol). This is game specific and depends on your settings. And no, because a dedicated server shouldn't be able to simply open ports on your firewall/router by itself (!!!big security risk!!!). That's fine for hosting on your local network, but not for a dedicated server: keep in mind that everyone outside your network can connect to a forwarded port through the internet, which is why you should always open ports yourself. Just imagine the dedicated server opens a wrong port to a wrong IP; a potential attacker could completely f**k up your whole network, all your computers, security cams, firewall, ... This question was also asked here: https://steamcommunity.com/app/251570/discussions/4/1742229167189666988/
    1 point
  14. No expander, but you can use a single 16 port HBA, e.g.: LSI 9201-16i or 9305-16i
    1 point
  15. What backplane? Does it have a SAS expander? If it's an expander backplane, you can connect 1 or 2 miniSAS cables from the HBA to access all slots; if not, you need 1 miniSAS cable for every 4 disks, though there are controllers that support 16 (or more) devices.
    1 point
  16. From the console, run "mc" to launch Midnight Commander, then navigate to /mnt/user/[share name] to delete the files. Then go to the GUI to delete the share (which should now be possible, as the share is empty). For a more Explorer-like interface, you can use the Dolphin or Krusader dockers; the path to delete will differ depending on how you set up the docker mappings. The reason to delete from the GUI is that it also deletes any redundant SMB settings.
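The console cleanup above amounts to emptying the share's directory before deleting it in the GUI. A sketch, using a mktemp directory as a stand-in for /mnt/user/[share name] so it is safe to run anywhere:

```shell
#!/bin/sh
share=$(mktemp -d)                 # stand-in for /mnt/user/[share name]
touch "$share/file1" "$share/.hidden"

# Empty the share but keep the directory itself;
# -mindepth 1 also catches hidden files that rm -rf "$share"/* would miss.
find "$share" -mindepth 1 -delete

ls -A "$share"                     # prints nothing: the share is now empty
```

Using find with -mindepth 1 rather than a shell glob is the detail that matters here, since dotfiles left behind would keep the GUI from treating the share as empty.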
    1 point
  17. RAM is always used by Linux (including Unraid, which is based on Linux) to cache writes. If the cached data is still in RAM, it is also used automatically as a read cache. There is no other functionality to use RAM as other forms of read/write cache.
    1 point
  18. You don't quite need all those super advanced techniques, as they only offer marginal improvement (if any). And then you have to take into account that some of those tweaks were for older-gen CPUs (e.g. NUMA tuning was only required for Threadripper gen 1 + 2), and some were workarounds while waiting for the software to catch up with the hardware (e.g. the cpu-mode tweak to fake a Threadripper as an Epyc so cache is used correctly is no longer required; use Unraid 6.9.0-beta1 for the latest 5.5.8 kernel, which supposedly works better with 3rd-gen Ryzen; or compile your own 6.8.3 with the 5.5.8 kernel for the same reason, etc.).

In terms of "best practice" for a gaming VM, I have these "rules of hand" (cuz there are 5 🧐):
1. Pick all the VM cores from the same CCX and CCD (i.e. die); this improves fps consistency (i.e. less stutter). Note: this is specific to a gaming VM, for which maximum performance is less important than consistent performance. For a workstation VM (for which max performance is paramount), VM cores should be spread evenly across as many CCX/CCDs as possible, even if it means partially using a CCX/CCD.
2. Isolate the VM cores in syslinux. The 2020 advice is to use isolcpus + nohz_full + rcu_nocbs (the old advice was just isolcpus).
3. Pin the emulator to cores that are NOT the main VM cores. The advanced technique is to also pin iothreads, but this only applies if you use vdisk / ata-id pass-through; from my own testing, iothread pinning makes no difference with NVMe PCIe pass-through.
4. Do the MSI fix with msi_util to help with sound issues. The advanced technique is to put all devices from the GPU on the same bus with multifunction; to be honest though, I haven't found this to make any difference.
5. Don't run parity sync or any heavy IO / CPU activities while gaming.

In terms of where you can find these settings:
1. The 3900X has 12 cores, which is 3x4: every 3 cores is a CCX, every 2 CCXs is a die (and your 3900X has 2 dies + an IO die). Watch the SpaceInvader One tutorial on YouTube.
2. Just remember to do what you do with isolcpus for nohz_full + rcu_nocbs as well.
3. This is a VM XML edit. Watch the SpaceInvader One tutorial on YouTube.
4. Watch the SpaceInvader One tutorial on YouTube; he has a link to download msi_util.
5. No explanation needed.

Note that due to the inherent CCX/CCD design of Ryzen, you can never match an Intel single-die CPU when it comes to consistent performance (i.e. less stutter). And this comes from someone currently running an AMD server, not an Intel fanboy. And of course, running a VM will always introduce some variability above bare metal.
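The core-isolation advice above (isolcpus + nohz_full + rcu_nocbs, all given the same core list) ends up in the syslinux append line. A sketch for a 3900X handing the VM cores 6-11, whose SMT siblings on this CPU are threads 18-23; the core numbers are an example, so check your actual pairings with lscpu -e or the Unraid CPU Pinning page:

```
append isolcpus=6-11,18-23 nohz_full=6-11,18-23 rcu_nocbs=6-11,18-23 initrd=/bzroot
```

On Unraid this line is edited under Main > Flash > Syslinux Configuration, and takes effect after a reboot.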
    1 point
  19. Onboard SATA ports are usually good enough, but several Ryzen users have issues where the onboard SATA controller stops responding, mostly when IOMMU is enabled. You can try the onboard ports, and if there are issues, use the HBA.
    1 point
  20. 1 point
  21. Skitals, would you be able to {compile the latest kernel} = {magic} with the patches again for us soon? Thanks
    1 point
  22. More accurately, it would be that the fstrim command itself thinks that the drive(s) are SSDs. Since you can't do anything about the command itself, what you'd have to do is forget about the plugin and run the appropriate commands via the User Scripts plugin on an appropriate schedule, e.g.: fstrim /mnt/cache -v would only trim your cache drive.
    1 point
  23. "php7" is missing from the command: sudo -u abc php7 /config/www/nextcloud/occ db:convert-filecache-bigint
    1 point
  24. Hi thanks unPaid. I had tried that before but like a moron left the <> in!
    1 point
  25. change this back to: <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader> <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram> or if your domain directory is different then you need to point it there.
    1 point
  26. Because under certain circumstances, a container won't be able to see any files stored within a path mounted by the Unassigned Devices plugin unless slave mode is enabled. Sent via telekinesis
    1 point
  27. Thanks @jonathanm, I will consider going ahead with it then. It would include Privoxy, and I guess that would fill the gap for people who want only a secure proxy and not a torrent client.
    1 point
  28. Maybe you could collaborate with @binhex and add a VPN support module.
    1 point
  29. Lol.... Yeah, I missed that line in the OP... Too many beers after work.
    1 point