Everything posted by testdasi

  1. Tested on 6.9.0-beta25. During an rsync operation between disk1 and a cache pool (my 3rd pool, i.e. using the 6.9.0 multi-pool feature), I noticed strange load on cores 40 and 56, so I investigated.
     1. Check the top 10 processes by CPU usage:
        ps aux | sort -nrk 3,3 | head -n 10
        Output (irrelevant entries removed - I checked each of the other entries and none uses core 40 or 56):
        ...
        root 11912 15.6 0.0 0 0 ? S 12:06 34:10 [unraidd1]
        root 1804 5.5 0.0 0 0 ? S 12:06 12:15 [kswapd2]
        ...
     2. Check which core unraidd1 uses:
        ps -aeF | grep unraid
        Output (i.e. unraidd1 uses core 56):
        root 11911 2 0 0 0 36 12:06 ? 00:00:00 [unraidd0]
        root 11912 2 15 0 0 56 12:06 ? 00:35:49 [unraidd1]
     3. Check which core kswapd2 uses:
        ps -aeF | grep kswap
        Output (i.e. kswapd2 uses core 40):
        root 1803 2 0 0 0 32 12:06 ? 00:02:10 [kswapd0]
        root 1804 2 5 0 0 40 12:06 ? 00:12:55 [kswapd2]
     4. My append line in syslinux:
        append isolcpus=32-63 nohz_full=32-63 rcu_nocs=32-63 kvm_amd.avic=1 mitigations=off pcie_acs_override=downstream,multifunction
     Given the name unraidd1, I'm guessing it's a process spawned by Unraid, and d1 is perhaps disk1, which the rsync reads from. So my conclusion is that Unraid spawns a process without considering isolcpus.
     kswapd0 manages swap space, so I'm assuming kswapd2 is the same? Thing is, I don't use the swap memory plugin or turn on any kind of virtual memory / swap space settings, so I'm a little surprised to see it putting load on the CPU. I also only use 67GB out of 96GB RAM, so I don't see why swap would be triggered.
     The load on both processes is not that high, but high enough to cause some lag while gaming.
     (PS: this is unrelated to the btrfs not respecting isolcpus bug I raised previously)
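     For reference, a minimal sketch (assuming bash and the 32-63 isolation range from my append line above) to list any processes currently scheduled on an isolated core:
       # PSR is the core each process last ran on, so this is only a point-in-time snapshot
       ps -eo pid,psr,comm --no-headers | awk '$2 >= 32 && $2 <= 63'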
  2. There's a guide on technicalramblings with the nginx config. You don't need the Wordpress docker. You need the LSIO Letsencrypt docker (which includes nginx) and mariadb (for the database), and then download WordPress directly from wordpress.org. https://technicalramblings.com/blog/how-to-set-up-a-wordpress-site-with-letsencrypt-and-mariadb-on-unraid/
  3. So I have a pool of 4 SSDs running in RAID5 for data chunks. It defaults to RAID1 for System and Metadata chunks. Since btrfs allows RAID10 with 4 devices, is there any harm in changing the metadata (and system) chunks to RAID10 instead of RAID1? Reason: theoretically faster with the same protection against a single failure.
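     For reference, the conversion itself would be something like this (a sketch, assuming the pool is mounted at /mnt/cache - adjust to your pool's mount point; converting the system chunks requires the force flag):
       # -mconvert changes the metadata profile, -sconvert the system profile (needs -f)
       btrfs balance start -f -mconvert=raid10 -sconvert=raid10 /mnt/cache
       # verify the new profiles afterwards
       btrfs filesystem df /mnt/cache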
  4. A "dotnet" process was killed due to OOM, which I think is used by qbittorrent. It could be a memory leak or could just be a natural out-of-memory situation, as you only have 8GB of RAM. You can try restricting the memory usage of some dockers (edit the docker, turn on Advanced View and use the --memory=# parameter in the Extra Parameters box, e.g. --memory=1G to limit memory usage to 1G) and see if it improves things.
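     A quick way to confirm what the kernel killed (a sketch, assuming console or SSH access):
       # show recent OOM-killer activity and which process was sacrificed
       dmesg | grep -i -E "out of memory|killed process"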
  5. @steini84: do you notice that ZFS uses isolated cores under heavy IO? Maybe it's a placebo effect, but it even looks like the spawned zfs processes prefer the isolated cores.
  6. So with Unraid 6.9.0 multi-pool functionality, I'm planning to create 2 SATA RAID pools, 1 with 3x SSD and 1 with 4x SSD. These pools will form the core of my NAS storage needs (i.e. not using the array anymore). That leaves a single free SATA port with a few potential candidates:
     - A Blu-ray drive
     - A 10TB HDD
     - An old 128GB SSD
     Moreover, Unraid requires at least 1 disk in the array (kinda archaic given 6.9.0, but that's a different story for a different time), which complicates things a little. What do you think I should do?
     - Use the old 128GB SSD as disk 1 via the last SATA port. This means I would be optical-less and spinner-less. My main concern is that I would suddenly need the Blu-ray for some unknown future reason (aka "don't know what you got till it's gone").
     - Use the 10TB HDD as disk 1 (which it is currently). This means no optical (so the same DKWYGTIG concern) but not spinner-less. This is sort of an "if it ain't broke, don't fix it" option but my least favourite, as (a) I'm keen on going fully solid state and (b) I won't be using the array, so the HDD is redundant.
     - Keep the optical drive on SATA + use a SATA-USB adapter for the old 128GB SSD. This seems the most flexible option, but I have reservations about using USB in the array - then again, I won't even be using the array, so maybe that's fine?
     - Keep the optical drive on SATA + use a spare USB stick as disk 1. Same reservations about using USB in the array.
  7. The 1-core-for-Unraid advice was based on the assumption that you run 1 parity, no encryption, no docker, no plugin. Are you running dual parity and/or encryption?
     Another potential reason is that you have some underlying issue with your disk(s). That can manifest as high IO wait, which shows up as high core load (as the core can't do anything while waiting for IO). This affects all SATA devices but is particularly perceivable with HDDs, so that's something you might want to look at. If you have an NVMe SSD that you can pass through to the VM as a PCIe device, that will also help, as IO over that NVMe doesn't go through Unraid and thus isn't really affected by Unraid IO load.
     Have a read through the topic meep quoted as it has some useful information (e.g. emulator pin) so you can tune things a little more, but if a core loads to 100% then there will be lag. Lag under heavy IO is a matter of more or less, not yes or no.
     With regards to how many cores for your VM vs Unraid, remember the config with the best performance isn't necessarily the one with the most consistent performance. With gaming, you generally prefer consistent performance, so having fewer cores assigned to your VM isn't necessarily a bad thing.
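     A quick sanity check for IO wait (a sketch, assuming console or SSH access; samples every 5 seconds):
       # the 'wa' column is the percentage of CPU time spent waiting on IO;
       # a persistently high value during the lag points at the disks rather than the cores
       vmstat 5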
  8. There isn't any audio player docker for Unraid, and even if there were, I don't think it would work since there isn't any audio device driver nor any way to control it. Even your VM solution may run into the problem of passing through the onboard audio device to a VM, which isn't always possible. Some external USB sound cards also don't like being attached to a VM via libvirt, so you might need to pass through a USB controller instead, which again may or may not be possible. It will probably save you time and effort to just get something like a Chromecast Audio. Last I had one, I was able to cast Plex to the Chromecast Audio hooked up to a set of speakers.
  9. Did you try rebooting the server? The RX570 has the reset issue, so all the problems could just be that it wasn't reset properly.
     Also, considering the 9600K has an iGPU, you probably want to set your motherboard to boot with the iGPU and then dump your own vbios for the RX570. That may (or may not) help with the reset issue.
     Changing the machine type needs a brand new template from scratch. Changing it in the GUI isn't gonna work due to the complex changes from PCI to PCIe.
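     Dumping the vbios can be done from the Unraid console; a sketch, assuming the card sits at 0000:03:00.0 (hypothetical address - check yours under Tools -> System Devices) and isn't bound to a running VM:
       # enable reading the ROM, copy it out, then disable again
       echo 1 > /sys/bus/pci/devices/0000:03:00.0/rom
       cat /sys/bus/pci/devices/0000:03:00.0/rom > /mnt/user/domains/vbios/rx570_dump.rom
       echo 0 > /sys/bus/pci/devices/0000:03:00.0/rom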
  10. What do you use your daily PC for? Without knowing that, it's hard to assess the cons of using a VM. Personally, I can see the cons with gaming under a VM, but I really don't see any major cons with a "daily" VM.
      In terms of running Unraid as a VM in Windows, I attempted exactly that and reported the findings in my build log. Have a look to see if it helps you. The fact that I gave it a serious look but abandoned the idea is the TL;DR.
  11. If you just want a fast pool, you don't really need ZFS. 6.9.0 + a btrfs cache pool works just as well. You might be mixing up the Unraid array with the (cache) pool. The pool runs RAID and doesn't have the array's performance limitations.
  12. While things may change, I really don't expect LT to implement ZFS in 6.9.0, due to a few factors:
      - Has the question surrounding ZFS licensing been answered? It's less of a legal concern for an enthusiastic user to compile ZFS into the Unraid kernel and share it; most businesses need proper (and expensive) legal advice to assess this sort of thing.
      - ZFS would count as a new filesystem, and I could be wrong, but I vaguely remember the last time a new filesystem was implemented was from 5.x to 6.x with XFS replacing ReiserFS. So it wasn't just a major release but a new version number altogether.
      - At the very least, the 6.9.0 beta has gone far enough along that adding ZFS would risk destabilising and delaying the release (which is kinda already overdue anyway, as kernel 5.x was supposed to ship with Unraid 6.8 - so overdue that LT made the unprecedented move of doing a public beta instead of only releasing RCs).
      So TL;DR: you are better off with the ZFS plugin (or a custom-built Unraid kernel with ZFS baked in) if you need ZFS now. Other than the minor annoyance of needing to use the CLI to monitor my pool's free space and health, there isn't really any particular issue I have seen so far, including when I attempted a mock failure-and-recovery event (the "magic" of just unplugging the SSD 😅)
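      For the CLI monitoring bit, this is about all it takes (a sketch using the standard zpool commands that come with the plugin):
        # capacity and free space per pool
        zpool list
        # health summary; prints "all pools are healthy" when nothing is wrong
        zpool status -x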
  13. @limetech: given we now have multi-pool support, would it be possible to eliminate the requirement to have a device in the array? Even the 6.9.0-beta GUI now disables array-related attributes if cache = only.
  14. Changes to the xml:
      - Changed the vendor ID to a dummy value instead of none, as none sometimes doesn't work.
      - KVM hidden state on.
      - Grouped all 4 functions of the graphics card on the same bus with multifunction on (i.e. mirroring what the device actually is).
      There is this line in the qemu log which makes me think it's perhaps the wrong vbios:
      2020-07-25T21:37:58.007304Z qemu-system-x86_64: -device vfio-pci,host=0000:08:00.0,id=hostdev0,bus=pci.2,addr=0x0,romfile=/mnt/cache/domains/vbios/TU116_edited.rom: Failed to mmap 0000:08:00.0 BAR 3. Performance may be slow
      A few more things for you to try:
      - Tools -> System Devices -> tick all 4 functions under 08:00 (i.e. your entire graphics card). Note: since you only have a single graphics card, you will lose Unraid display if you do this.
      - Try a different vbios. Note: only use xml mode to edit this line: "<rom file='/mnt/user/domains/vbios/TU116_edited.rom'/>". Don't use the GUI to edit, as it will undo the multifunction change.
      - Boot Unraid in legacy mode.
      - Add a 2nd low-end graphics card for Unraid to boot with.
  15. Try this new xml. <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>HTPC</name> <uuid>7d488047-bee1-0b58-04d5-e9481ef7c018</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>12582912</memory> <currentMemory unit='KiB'>12582912</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>8</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='7'/> <vcpupin vcpu='2' cpuset='3'/> <vcpupin vcpu='3' cpuset='9'/> <vcpupin vcpu='4' cpuset='4'/> <vcpupin vcpu='5' cpuset='10'/> <vcpupin vcpu='6' cpuset='5'/> <vcpupin vcpu='7' cpuset='11'/> </cputune> <os> <type arch='x86_64' machine='pc-q35-5.0'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/7d488047-bee1-0b58-04d5-e9481ef7c018_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='a0123456789b'/> </hyperv> <kvm> <hidden state='on'/> </kvm> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' dies='1' cores='4' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0xa'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0xc'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x12'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x13'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x14'/> <address type='pci' domain='0x0000' 
bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0x8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0xb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> </controller> <controller type='pci' index='9' model='pcie-to-pci-bridge'> <model name='pcie-pci-bridge'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:ae:83:02'/> <source bridge='br0'/> <model type='virtio-net'/> <address type='pci' domain='0x0000' bus='0x09' slot='0x01' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </source> <rom file='/mnt/user/domains/vbios/TU116_edited.rom'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0' multifunction='on'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x1'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x2'/> </source> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x2'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x08' slot='0x00' function='0x3'/> </source> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x3'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </hostdev> <memballoon model='none'/> </devices> </domain>
  16. A bit pedantic, but Unraid has 2 concepts that you sort of mixed up.
      - The "array" is basically data aggregation (like linux unionfs / mergerfs or windows DrivePool) with additional parity logic built on top. That's probably what you referred to as "filesystem pool". There is no trim on the array, write speed is limited, and there are potential (albeit rare) parity errors with SSDs. No RAID, but it allows mixed-size drives (as long as parity is the largest).
      - The "pool" aka "cache pool" (or with 6.9.0, "fast pools") is basically a RAID pool. It used to be used exclusively as write cache, but that is archaic; the pool is now used for anything that you want speed for. It uses btrfs RAID (or optionally xfs for a single-drive pool), so it restricts the sort of mixed-size drives you can use (e.g. 1TB+1TB+2TB RAID-1 is fine but RAID-0 will "lose" 1TB of capacity).
      There must be at least 1 device in the array. To run everything in the pool, some have had success with putting a spare USB stick (i.e. DIFFERENT from the Unraid boot stick) in the array just to get over the requirement.
      With SSDs, you are probably better off with them in the pool (and with 6.9.0, currently in beta, you can even have multiple pools). I recommend either using 6.9.0-beta25 or waiting for 6.9.0-rc1 (or for 6.9.0 stable, which has some way to go), as it works better with SSDs (mainly reducing excessive writes and improving performance).
      Snapshots: there are 2 main ways to do it, neither built in.
      - BTRFS can do snapshots natively. You can refer to the topic below with some sample scripts and methodologies by our btrfs guru johnnie.black. No 3rd-party software needed.
      - Use ZFS snapshots - I use this for my 2 less important VMs (my most important VM has all the NVMe's passed through for best performance). This needs ZFS, which is a 3rd-party piece of software that you install either with the ZFS plugin or via a custom-built kernel with ZFS already baked in (e.g. from ich777).
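      As an illustration of the btrfs route, a one-line sketch (paths are hypothetical; the source must be a subvolume and the destination folder must already exist):
        # create a read-only, dated snapshot of the domains subvolume
        btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/snaps/domains_$(date +%Y%m%d)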
  17. The G5400 is fine for your usage without encryption. Last I tested encryption (and dual parity), they had a rather drastic impact on CPU load. Things may have improved by now, but I would still caution against using a low-power CPU with encryption. You may want to think outside the box a bit and perhaps consider something like the Ryzen 2200G. Of course you would need to change motherboard choices etc., so it can be quite daunting.
      Running RAM in single-channel will roughly halve its bandwidth. Whether that's small, medium, big or huge is rather "it depends". To be honest, I don't think it will be perceptible given your stated use case, so if adding another stick is in your upgrade plan then it's probably ok. You should still download the motherboard manual from the manufacturer's website to read up, in case there are any special provisions if you only use a single DIMM.
  18. You are better off posting a separate topic with your specific details.
  19. There isn't really a best practice; it rather depends on what you need. The UD script is probably the simplest, without any bells and whistles. I use rsync + zfs snapshots. Some use duplicati.
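      If you go the rsync route, the core of it is just one command (a sketch; source and destination paths are hypothetical):
        # archive mode, human-readable sizes, and delete files at the destination that no longer exist at the source
        rsync -avh --delete /mnt/user/appdata/ /mnt/disks/backup/appdata/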
  20. Some pointers:
      - Your CPU doesn't have integrated graphics, so you are assuming the ability to boot headless, which cannot be assumed with consumer motherboards. You are better off with a CPU with an iGPU; cost cutting here is very risky.
      - The DS380 case is notorious for a poor design that leaves basically no airflow over the drives. You will have to do some small mods to make it work (just google it), so keep that in mind. The smallest ITX case I trust is the Fractal Design Node 304.
      - I think the Windows line means it supports 64-bit, not that it doesn't work with Linux. My X399 Designare has the same Windows line.
      - Unraid does NOT require ECC RAM. If you have to choose between paying for ECC and paying for an iGPU, the iGPU wins every time. ECC is a nice-to-have; the iGPU may be the difference between booting at all or not.
      - Are you running 2 sticks or 1 stick of RAM? You should always try to populate all RAM channels, so if you want 8GB then you want 2x4GB.
      - 8GB RAM is sufficient for the use case you described. Your CPU choice is also fine, except for the lack of iGPU as mentioned above.
  21. You can run RAM at a lower clock without needing to swap out the old set, so maybe try that before attempting to use the old sticks. You can also run memtest, but given it's ECC RAM, it can be unstable yet yield no errors.
  22. That's a pretty build. A smacking good job with cable management as well!
  23. Assuming you have 2 separate NICs, then create 2 bridges, add 2 virtual NICs to the VM, and configure them.
  24. A RAIDz2 with 12 drives is 10+ times faster than the same on Unraid because ZFS stripes data, and that applies not just to throughput but also latency. So 90 seconds (1.5 minutes) on ZFS versus roughly 10x that, i.e. 15 minutes, on Unraid sounds about right, and there isn't much more you can do, I think.