Leaderboard

Popular Content

Showing content with the highest reputation on 08/15/21 in all areas

  1. I'll add it back, don't want to get between anyone and their WINE
    4 points
  2. It was nice and fun to be able to put a short saying in there. Additionally, it was always good for a giggle when seeing tags for @constructor.
    3 points
  3. There have been several posts on the forum about VM performance improvements from adjusting CPU pinning and assignments when VMs stutter during media playback and gaming. I've put together what I think is the best of those ideas. I don't claim this is the complete answer, but it has helped me with a particularly latency-sensitive VM.

Windows VM Configuration

You need a well-configured Windows VM to get any improvement from CPU pinning. Configure your VM as follows:
- Set the machine type to the latest i440fx.
- Boot with OVMF, not SeaBIOS, for Windows 8 and Windows 10. Your GPU must support UEFI boot if you are doing GPU passthrough.
- Set Hyper-V to 'yes' unless you need it off for Nvidia GPUs.
- Don't initially assign more than 8 GB of memory, and set 'Initial' and 'Max' memory to the same value so memory ballooning is off.
- Don't assign more than 4 CPUs total. Assign CPUs in pairs if your CPU supports hyperthreading.
- Be sure you are using the latest GPU driver.
- I have had issues with virtio network drivers newer than 0.1.100 on Windows 7. Try that driver first and update once your VM is performing properly.

Get the best performance you can by adjusting the memory and CPU settings. Don't over-provision CPUs and memory; you may find that performance decreases. More is not always better.

If you have more than 8 GB of memory in your unRAID system, I also suggest installing the 'Tips and Tweaks' plugin and setting the 'Disk Cache' settings to the suggested values for VMs (click the 'Help' button for the suggestions). Also set 'Disable NIC flow control' and 'Disable NIC offload' to 'Yes'; these settings are known to cause VM performance issues in some cases. You can always change them back later.

Once your VM is running correctly, you can adjust CPU pinning to possibly improve performance further. Unless your VM is configured as above, you will probably be wasting your time with CPU pinning.
What is Hyperthreading?

Hyperthreading is a means of sharing one CPU core between multiple processes. A hyperthreaded core is one core with two hyperthreads:

HT ---- core ---- HT

It is not a base core plus a single HT:

core ---- HT

When isolating CPUs, the best performance is gained by isolating and assigning both halves of a pair to a VM, not just what some think of as the "core".

Why Isolate and Assign CPUs

Some VMs suffer from latency because of sharing hyperthreaded CPUs. The method described here helps with the latency caused by CPU sharing and context switching between hyperthreads. If you have a VM that suffers from stuttering or pauses in media playback or gaming, this procedure may help. Don't assign more CPUs to a VM that has latency issues; that is generally not the problem. I also don't recommend assigning more than 4 CPUs to a VM. I don't know why any VM needs that kind of horsepower.

In my case I have a 4-core Xeon processor with hyperthreading. The CPU layout is:

0,4 1,5 2,6 3,7

The hyperthread pairs are (0,4), (1,5), (2,6), and (3,7); each core runs two hyperthreads. When assigning CPUs to a high-performance VM, assign them in hyperthread pairs.

I isolated some CPUs for the VM from Linux with the following in the syslinux configuration on the flash drive:

append isolcpus=2,3,6,7 initrd=/bzroot

This tells Linux that physical CPUs 2, 3, 6, and 7 are not to be managed or used by Linux.

There is an additional setting for vcpus called 'emulatorpin', which moves the emulator tasks onto other CPUs and off the VM CPUs. I then assigned the isolated CPUs to my VM and added the 'emulatorpin':

<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
  <emulatorpin cpuset='0,4'/>
</cputune>

What ends up happening is that the 4 logical CPUs (2,3,6,7) are not used by Linux but are available to assign to VMs.
I then assigned them to the VM and pinned the emulator tasks to CPUs (0,4), the first CPU pair; Linux tends to favor the low-numbered CPUs. Make your CPU assignments in the VM editor, then edit the XML and add the emulatorpin assignment. Don't change any other CPU settings in the XML. I've seen recommendations to change the topology:

<cpu mode='host-passthrough'>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>

Don't make any changes to this setting. The VM manager sets it appropriately; there is no advantage in changing it, and it can cause problems like a crashing VM.

This has greatly improved the performance of my Windows 7 Media Center VM serving Media Center Extenders. I am not a KVM expert, and this may not be the best way to do this, but from reading forum posts and searching the internet, it is the best I've found so far. I would like to see LT offer some performance tuning settings in the VM manager that would help with these settings without all the gyrations I've gone through here to get the performance I need. They could at least offer some 'emulatorpin' settings.

Note: I still see confusion about physical CPUs, vcpus, and hyperthreaded pairs. A CPU pair like 3,7 is two threads that share a core; it is not a core plus a hyperthread. When isolating and assigning CPUs to a VM, do it in pairs. Don't isolate and assign one (3) but not its pair (7) unless you don't assign 7 to any other VM; that is not going to give you what you want. vcpus are relative to the VM only. You don't isolate vcpus; you isolate physical CPUs that are then assigned to VM vcpus.
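As a quick sanity check before rebooting, the isolcpus value can be assembled in the shell from the CPUs chosen above (a trivial helper for illustration, not part of the original procedure):

```shell
# CPUs to isolate: both halves of the hyperthread pairs (2,6) and (3,7).
cpus="2 3 6 7"
# Join them with commas, the format the kernel expects for isolcpus=.
isolcpus=$(echo $cpus | tr ' ' ',')
echo "append isolcpus=${isolcpus} initrd=/bzroot"
```

After rebooting, `cat /sys/devices/system/cpu/isolated` should report the isolated CPU list on most kernels, which confirms the syslinux change took effect.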
    1 point
  4. Overview: Support for the Unbound Docker Container

Docker: https://hub.docker.com/r/kutzilla/unbound
GitHub: https://github.com/kutzilla/unbound-docker

This is an unofficial Docker implementation of Unbound, built to run Unbound on your Unraid machine. Unbound is a validating, recursive, and caching DNS resolver. You can use it to create your own recursive DNS server at home, pair it with services such as Pi-Hole, or create custom DNS records for your local network. Here is a tutorial on how to configure Pi-Hole with Unbound (not exclusively on Unraid):
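For reference, a minimal recursive-resolver section of unbound.conf might look like the sketch below; the subnet and hostname are placeholders you would adapt to your LAN, and the container's bundled configuration may differ:

```
server:
    # listen on all interfaces inside the container
    interface: 0.0.0.0
    # allow queries from the local network only (adjust to your subnet)
    access-control: 192.168.1.0/24 allow
    # example of a custom record for a local host (placeholder name/address)
    local-data: "nas.home. IN A 192.168.1.10"
```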
    1 point
  5. Are you thinking about getting a new Unraid server? Not sure whether to buy or build? Unsure whether to go Intel or AMD? Do you need help sizing for your needs? Well, then this is the episode for you! Join Jonathan Panozzo, CSO of Lime Technology, for a deep discussion on how to pick the right hardware for Unraid to best fit your needs. This episode covers the difference between real-time and non-real-time applications, how to size for VMs, gaming, and much more! We hope you follow or subscribe on your preferred podcast app to stay tuned for new episodes of the Uncast!
    1 point
  6. Enable user shares in the share settings. The default is off for 6.10.
    1 point
  7. Unfortunately, yes, there are. Enabling this setting gives the Docker container the ability to execute 32-bit commands.
    1 point
  8. Should have asked before doing anything. Parity has none of your data. A non-correcting parity check doesn't change any disk. A correcting parity check, or even a parity rebuild, only writes parity and will not affect any of your data disks. None of your attachments are working for some reason (I think I have seen reports that drag-and-drop isn't working at the moment). Attach them to your NEXT post in this thread and wait for further advice.
    1 point
  9. As I understand it: go to 'Edit' of the container and look at this: The portion circled in red is the Unraid file directory that Krusader can work within. The one circled in green is what that path is called inside the Docker container. (Think of the Docker container as a VM that only runs Krusader. A defined 'Host container path' is the only way that VM can access anything outside of the VM environment.) PS: I don't believe this is the default Host container path; I seem to recall that it is /mnt/user. I modified it so that I could work on the disk shares, BUT I understand the issue of data corruption if one uses Krusader (or any other file manager) to move files between disk shares and user shares! By the way, you can add a 'Host Path 3:' to point to another place in the Unraid file structure...
    1 point
  10. Two ideas: A) Create a virtual machine in Windows and run unRAID inside it. B) Only plug the unRAID USB stick into the PC when needed and boot from it. In that case you should definitely not assign the Windows disk to the unRAID array or a pool. You may, however, mount it via Unassigned Devices and read files from it. And yes, unRAID needs an empty disk for the array.
    1 point
  11. ...the call goes out to @kanish. Sent from my SM-G960F using Tapatalk
    1 point
  12. Well... now I feel like a big dummy... I read that part about usable size but interpreted it as something else. I updated my BIOS and now I have 32 GB of usable memory. The utilized RAM also no longer exceeds 60% after booting, so it seems perfect now. Thanks a lot for pointing me to the trivial answer!
    1 point
  13. I encountered the same error when I tried to purge an old worker. I got the following suggestion from Guy in the Discord channel, which resolved it:
1) Stop the Machinaris container in the Unraid Admin UI.
2) Delete only this file: /mnt/user/appdata/machinaris/machinaris/dbs/machinaris.db
3) Start the Machinaris container in the Unraid Admin UI.
4) Wait about 3-5 minutes for everything to start and get re-populated, then try clicking around in the Machinaris WebUI.
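The same steps can be done from a shell. The sketch below only echoes the commands (a dry run) so they can be reviewed first; the container name `machinaris` is an assumption based on the steps above, so verify it on your own system:

```shell
# Database file named in the steps above.
DB=/mnt/user/appdata/machinaris/machinaris/dbs/machinaris.db

# Dry-run wrapper: prints each command instead of executing it.
# Change 'echo "$@"' to '"$@"' to actually run the commands.
run() { echo "$@"; }

run docker stop machinaris   # container name assumed, check yours
run rm -f "$DB"
run docker start machinaris
```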
    1 point
  14. Love the dashboard, but it would be nice if we could hide specific Docker apps and VMs. I don't really need quick access to every app.
    1 point
  15. For those interested, it seems the changes in version 6.10.0-rc1 have fixed the issue of losing the KVM/local monitor on the ASRockRack E3C246D4U board (BIOS L2.21A) when the onboard GPU is enabled for hardware encoding. I was watching the bootup in the KVM during the upgrade reboot, and my jaw dropped when I saw a login prompt rather than the screen going black at the end of the boot process!
    1 point
  16. You do that by editing the Docker template page. Go to the DOCKER tab, click on Gitlab-CE, and select "Edit". Locate the field labeled Repository; it should be set to gitlab/gitlab-ce:latest. This is what controls which version of Gitlab-CE you are using. A quick breakdown of the components of this string:
gitlab: user on Docker Hub
gitlab-ce: name of the repository
latest: version
To change it to something else, first find the official tags on the Docker Hub page for that project. Click on the Tags tab and search for the version that you want. For example, 13.9.2 corresponds to 13.9.2-ce.0. Now enter that version instead of latest in the Repository field of the Gitlab-CE docker template:
Repository: gitlab/gitlab-ce:13.9.2-ce.0
Hope that helps!
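The edit amounts to replacing the tag after the colon. As a sketch, the same substitution in shell (the tag value is just the example from the post):

```shell
repo="gitlab/gitlab-ce:latest"
# Keep everything before the last ':' and append the pinned tag.
pinned="${repo%:*}:13.9.2-ce.0"
echo "$pinned"   # gitlab/gitlab-ce:13.9.2-ce.0
```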
    1 point
  17. You can safely stop it and read the data that has been copied so far. If you set the target to /dev/sdX, it's as simple as mounting it like normal. You may need to reboot or reconnect the drive for UD to see the new partition table. You may also need to run a checkdisk on it from a Windows PC if it won't mount properly. Just note that if you did not specify a mapfile, you can't resume the copy process if you wish to attempt to copy the bad sectors later. It will have to start over from the beginning.
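For context, GNU ddrescue takes the mapfile as its third argument, which is what makes a later retry of the bad sectors possible. A sketch of a resumable invocation follows; the device names and mapfile path are placeholders, and the command is only echoed here so nothing is written by accident:

```shell
src=/dev/sdX          # failing source disk (placeholder)
dst=/dev/sdY          # destination disk (placeholder)
map=/boot/rescue.map  # mapfile that records copied sectors, enabling resume

# Echo only for safety; remove 'echo' to run for real.
# -f is required when the destination is a raw device.
echo ddrescue -f "$src" "$dst" "$map"
```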
    1 point
  18. I really wish you would not remove CONFIG_X86_X32. I fought to get this added back to unraid in 2017 so dockers that require 32bit support could still run. Removing this will break any docker that requires 32bit support. Thanks, Chris
    1 point
  19. Does anyone use the Homeassitant docker from the linuxserver repo? It is good?
    1 point
  20. When trying to connect to UNRAID via ssh, after entering my password, I would keep getting the error that I have to use a version of netcat (nc) that supports the -U option. Well, with the help of a Slackware 14.2 VM and the build instructions from @kode54 I was able to build the necessary components and install them. (I couldn't get it to compile on the UNRAID server itself despite installing the Dev Pack.) Anyway, here are the two pre-compiled packages in case you want them. They work on my 6.8.3 version of UNRAID. libbsd-0.10.0-x86_64-1_SBo.tgznetcat-openbsd-1.217_1-x86_64-1_SBo.tgz Rename your nc in /usr/bin/ to something else before installing the new one. Install using: installpkg libbsd-0.10.0-x86_64-1_SBo.tgz installpkg netcat-openbsd-1.217_1-x86_64-1_SBo.tgz
    1 point
  21. All, I encountered this problem for the first time today across most of my PCs, and the fixes in this thread did not work. What did work was adding a line to the [global] section of my unRAID smb extra configuration settings: client min protocol = SMB2 To be clear for those who are less versed in smb settings, this is in addition to a line I already had in there for the server, which is simply min protocol = SMB2. Note, I assume this change will break any SMB1 clients trying to connect, but hopefully those are increasingly few and far between. That said, I have not turned off the SMB1 client in my Windows features, though I assume it will no longer be necessary for accessing unRAID. I might test that later this week and report back with an update.
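Put together, the [global] block in the SMB extra configuration would then contain both lines described above (server and client minimums):

```
[global]
    min protocol = SMB2
    client min protocol = SMB2
```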
    1 point
  22. First, you should use the mdX device so that parity remains valid; if it's disk1, start the array in maintenance mode and use md1. Second, remove -n (the no-modify flag) from the command. So it should be: xfs_repair -v /dev/md1
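As a sketch, the device name follows directly from the disk number (disk1 maps to md1, per the advice above). The command is echoed rather than executed, since xfs_repair should only run against an array started in maintenance mode:

```shell
disk=1                  # array disk number from the Unraid UI
dev="/dev/md${disk}"    # md device keeps parity in sync during the repair
# -n would only report problems; drop it (as above) to actually repair.
echo xfs_repair -v "$dev"
```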
    1 point
  23. That's interesting. I did some testing a few days ago, pinning hyperthreaded pairs and then pinning non-hyperthreaded pairs, and running PassMark CPU benchmark software afterwards. These are my results using a 12-core Xeon @ 2.4 GHz, assigning 8 threads in each test. First, 8 threads as hyperthread pairs (4 physical cores): PassMark score 7256. Next, 8 threads all unpaired, one thread from each of 8 different cores: PassMark score 10417. With the threads all on separate cores the speed was faster, but I guess it wouldn't have been if the other thread on each core was being used by another process. I know this is a bit off topic, but I thought it interesting, as I never expected this result.
    1 point