Everything posted by testdasi

  1. Not particularly with typical Unraid uses. It's more relevant for small files, but most media files are gigabytes in size.
  2. Updates: Attempted to update the BIOS to F12i only to waste 2 hours of my life, as the new BIOS is buggy and the patch notes are misleading. It couldn't reliably save config. When config could be saved, it didn't persist beyond 1-2 boot cycles. Exit without saving = can't boot up at all (need to clear CMOS for it to boot back up). Saving a profile crashed the BIOS itself (blank screen). The patch notes list "PCIe bifurcation" as an additional feature, which is misleading: it really just changes the wording in the BIOS. The feature was already available for a while (e.g. in F12e), just under a different name.
     So in the process of restoring back to F12e (fortunately I kept both the original Gigabyte BIOS and my tweaked one), I discovered Global C-State Control is critical (at least on the 5.x kernel) for performance. With it disabled, CDM random performance dropped by 75%! It was not just the benchmark; the lag was obvious, albeit not completely unusable like the ACS Override lag issue. On that subject, perhaps the 2 issues are related and the newer kernel just mitigates the situation to some extent. Also turned on Precision Boost and Typical Current Idle to see if that stabilises things.
     Did some additional tweaks to the workstation VM. HyperV in the <features> section - all of these offer even more performance, from 1% to 5%. The vendor_id is an error code 43 workaround (even though I don't have the issue); the value must be exactly 12 characters for it to work.
     <hyperv>
       <relaxed state='on'/>
       <vapic state='on'/>
       <spinlocks state='on' retries='8191'/>
       <vpindex state='on'/>
       <synic state='on'/>
       <stimer state='on'/>
       <reset state='on'/>
       <vendor_id state='on' value='0123456789ab'/>
       <frequencies state='on'/>
     </hyperv>
     KVM - error code 43 workaround:
     <kvm>
       <hidden state='on'/>
     </kvm>
     IOAPIC - apparently Q35 on QEMU 4.0 had some changes that require this line - not that I had any issue that prompted it.
     <ioapic driver='kvm'/>
     Source: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#QEMU_4.0:_Unable_to_load_graphics_drivers/BSOD/Graphics_stutter_after_driver_install_using_Q35
  3. A few points:
     Any particular reason why you are still on 6.6.6? You should update to the latest Unraid unless there's a compelling reason not to. If 6.7.2 or 6.8.0 (when it gets released as stable) doesn't help, you might want to wait for 6.9.0-rc1 (which LT said should come out rather quickly after 6.8.0 stable) since it has the 5.x kernel which, in my case, helps stabilise ACS Override.
     Your syslinux config is a mash-up of various things. You should (see the example append line below):
     - Remove iommu=pt
     - Remove iommu=1
     - Remove pcie_acs_override=id:1912:0014
     - Add pcie_acs_override=downstream,multifunction
     - Add vfio-pci.ids=1912:0014
     For your chipset USB controller, you might want to check the BIOS for any funky settings, e.g. stuff for external sound cards etc. That might be the cause of the instability of the other port.
     Now I know you said multifunction doesn't work, but having the card in the same IOMMU group as other critical devices definitely won't work. We can try tweaking the xml instead. Can you attach the xml of the troublesome VM? If copy-pasting, please use the code functionality (the </> button next to the smiley button).
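     For illustration, a minimal sketch of what the boot entry in your syslinux.cfg could look like after those changes (the label and the rest of the entry are assumed Unraid defaults; the vfio-pci ID is the one from your current config):
     label Unraid OS
       menu default
       kernel /bzimage
       append pcie_acs_override=downstream,multifunction vfio-pci.ids=1912:0014 initrd=/bzroot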
  4. In short, you can't. In long, it is kind of possible to simulate it by having 2 separate sets of dockers and a set of bash scripts, but it's not true isolation.
  5. The first thing that stands out: you need an additional GPU for your MacVM.
  6. Your post needs a major correction or people may end up nuking their SSDs.
     CORRECTION: TL;DR - do NOT preclear SSDs.
     Preclearing an SSD is not a "need not" but rather a "should not", and it has nothing to do with overheating the SSD. Preclear has 2 main uses: stress-testing an HDD on arrival before adding it to the array (due to the high early failure rate) and zeroing out the disk (so parity doesn't have to be recalculated). SSDs have a lower early failure rate than HDDs (due to not having moving parts - think UPS-driver-dropping-your-HDD sorta early failure), hence the need to stress test is reduced. As everyone knows by now, SSD cells have limited write cycles, so unnecessary writes should be avoided. So based on the above, running preclear will just unnecessarily waste a whole write cycle for no reason, especially since the SSD is not added to the array (with parity) but rather to the cache pool. And depending on the controller, preclear activities may confuse it and interfere with the wear-levelling and garbage-collection algorithms. At best, everything works and you just waste a write cycle. At worst, it can cause WL and GC to go crazy, causing significant write amplification, i.e. it MAY nuke your SSD. So unless there's a very good reason to do it (and I can't see any right now), do NOT preclear SSDs.
     To the OP: There are instructions on how to replace drives in a btrfs cache pool, which I don't think you have followed (or not followed properly). You can probably go have a celebratory beer now cuz what you did could have gone rather wrong. Next time, please wait for advice.
  7. Updates: Thanks to @Jerky_san's xml, I made some edits to my main VM so it can now evenly distribute RAM across nodes 0 and 2 without the need for dummy VMs to block out available RAM. I went a bit further than his code and created 4 guest nodes (since my VM spans all 4 host nodes) to make process pinning with Process Lasso much easier - no need to click each core anymore, just click on the NUMA box to select all cores in that NUMA node.
     Code:
     This numatune section (after </cputune>) creates 4 guest NUMA nodes (cellid) with strict allocation from host nodes 0 and 2 (nodeset). The 2990WX only has 2 nodes with memory access, so cells 2 and 3 are also allocated to host nodes 0 and 2.
     <numatune>
       <memory mode='strict' nodeset='0,2'/>
       <memnode cellid='0' mode='strict' nodeset='0'/>
       <memnode cellid='1' mode='strict' nodeset='2'/>
       <memnode cellid='2' mode='strict' nodeset='0'/>
       <memnode cellid='3' mode='strict' nodeset='2'/>
     </numatune>
     This cpu section is where the magic happens. The numa section allocates an exact RAM amount to each guest node (which, using numatune above, is allocated plus overhead to the appropriate host node). Obviously the total across guest nodes should equal the total memory allocated to the VM. The cpus tag identifies which guest cores are assigned to which cell ID. I grouped them in the same chiplet arrangement (e.g. cores 0-5 are in the same chiplet, matched to NUMA node 0, etc). Cores in cells 2 and 3 are from the host nodes without a memory controller; however, since numa doesn't allow zero memory, I allocated a token amount of 1GB to each.
     <cpu mode='custom' match='exact' check='full'>
       <model fallback='forbid'>EPYC</model>
       <topology sockets='1' cores='24' threads='1'/>
       <feature policy='require' name='topoext'/>
       <feature policy='disable' name='monitor'/>
       <feature policy='require' name='hypervisor'/>
       <feature policy='disable' name='svm'/>
       <feature policy='disable' name='x2apic'/>
       <numa>
         <cell id='0' cpus='0-5' memory='25165824' unit='KiB'/>
         <cell id='1' cpus='6-11' memory='25165824' unit='KiB'/>
         <cell id='2' cpus='12-17' memory='1048576' unit='KiB'/>
         <cell id='3' cpus='18-23' memory='1048576' unit='KiB'/>
       </numa>
     </cpu>
     These were not related to the numa allocation but I still added them since they seem to give me a marginal 1-2% improvement in performance.
     HyperV:
     <hyperv>
       <vpindex state='on'/>
       <synic state='on'/>
       <stimer state='on'/>
       <reset state='on'/>
       <vendor_id state='on' value='KVM Hv'/>
       <frequencies state='on'/>
     </hyperv>
     Clock:
     <clock offset='localtime'>
       <timer name='hypervclock' present='yes'/>
       <timer name='hpet' present='yes'/>
     </clock>
     Jerky_san's original post:
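     As a quick sanity check on the totals (my arithmetic, not from Jerky_san's post): cells 0 and 1 get 25165824 KiB (24 GiB) each and cells 2 and 3 get 1048576 KiB (1 GiB) each, so the VM's overall allocation should be 2 x 24 + 2 x 1 = 50 GiB. In other words, the top-level memory element of the xml would need to read something like:
     <memory unit='KiB'>52428800</memory>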
  8. That description of "brutally slow" means nothing. What is the actual transfer speed? From my own experience, only Linus-level stuff can make video rendering hit the disk's limit. Even 125 MB/s (the network-limited speed) is 1 gigabit per second <-- that is a ridiculous bit rate.
  9. How did you do it please? My workaround has been to start a dummy VM to reserve X amount of RAM from the node which then would force the actual VM to spread evenly across nodes. Would love to not have to do that. LOL Also with regards to NVMe, try turning on Hyper-V. As I recently found out, Hyper-V off slows down my NVMe SSD so perhaps it would help your case.
  10. Even, aka Least Used, aka the opposite of Most Free? The current Most Free and High Water allocation methods tend to under-utilise low-capacity drives in a mixed-size config, which tends to prompt a lot of "why is this disk not used" questions. I think the Even allocation method should be the default option instead of High Water. The logic itself is rather simple: find the disk whose free space is non-zero and above the minimum free space setting, and which has the lowest used space (or, if used space isn't a readily available attribute, the highest total minus free space).
  11. Tools -> Diagnostics -> attach zip file. Also, have you tried passing through the chipset USB 3.0 controller instead? That would save you the trouble of additional USB cards.
  12. I strongly disagree with this statement. The user must first consider which setup suits them BEST in terms of DATA PROTECTION. FreeNAS relies on a RAID-like setup, in the sense that data is striped across multiple disks. This means if you have more failed drives than parity, you are guaranteed to lose ALL your data, because effectively every single file will be missing a portion of itself. Unraid is, as its name suggests, NOT RAID. Each data disk has its own file system and there is no striping (i.e. each file is stored fully on only ONE disk). This means if you have more failed drives than parity, you will only lose SOME of your data (the files actually saved on the failed drives); each file on the working drives is still a complete file. For the vast majority of users, losing some data is preferable to losing all data. Available storage is a secondary concern, because if one does not care about losing all data then one should not even bother with parity; hence no parity, hence no available-storage concern.
  13. Your post is beyond a joke. You are blaming the OS for what are clearly hardware issues. LimeTech does not manufacture your server nor your USB sticks. Do you blame Windows if your laptop breaks down? No! You blame Lenovo, Asus or whoever MANUFACTURES your laptop! Perhaps it needs to be said in the post-Trump world that screaming loudly doesn't really get anything solved.
  14. Yes. Use the Unassigned Device plugin (can be found on Community Apps). NTFS is supported by the plugin.
  15. Plugins are stored on the USB stick and loaded into RAM at boot. Config files for plugins should also be on the USB stick.
  16. You need to set the share to Use cache = Only, but if you followed SpaceInvaderOne's guide on YouTube then you would have set that up as part of the tutorial, so "by default"-ish.
  17. Try appending iommu=pt to your syslinux config. It was reported in the topic you quoted from bugzilla that that was a mitigation.
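      For reference, a minimal sketch of what the boot entry in syslinux.cfg could look like with that flag added (assuming an otherwise default Unraid entry):
      label Unraid OS
        menu default
        kernel /bzimage
        append iommu=pt initrd=/bzroot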
  18. No need to disable Wifi and USB 3.1. They barely use any resources, if at all. In fact, on the subject of USB: the X399 chipset has 2x USB 3.0 controllers that can be passed through to your VMs. At the back of the mobo you will see 2 groups of 4x USB 3.0 ports that are obviously "together", so to speak - those are the ones. Even though you are not hot-plugging devices, I would suggest passing each controller to your gaming and streaming VMs (unless your video editing VM has issues with USB ports - someone reported on here that their external sound card errors out when connected the normal way, i.e. not through a passed-through USB controller). This is especially important for the gaming VM: a passed-through USB controller has the lowest (albeit theoretical) latency regardless of devices.
      The USB 3.1 controller cannot be passed through, even with ACS Override. I have not had any success and have not seen any success story. IIRC, the USB 2.0 and internal ports are also connected to the 3.1 controller. Connect your Unraid USB stick to a USB 2.0 port (you can get an internal-header-to-USB-2.0 adapter).
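      If you go the passthrough route, the controller ends up in the VM xml as a PCI hostdev entry along these lines (the bus/slot/function address below is made up for illustration - substitute the actual PCI address of your USB controller as shown under Tools -> System Devices):
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <address domain='0x0000' bus='0x08' slot='0x00' function='0x3'/>
        </source>
      </hostdev>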
  19. Ok, in that case, you will need to note the below:
      You will certainly need ACS Override for it to work, since I'm pretty sure the PCIe x4 slot is connected to the chipset, so it's in the same IOMMU group as your LAN, Wifi etc. The x8 and M.2 slots tend to be together in the same group too, so they need ACS Override to break them out.
      Since you are passing through a 1080Ti as primary (i.e. what Unraid booted with), also expect a potential run-in with error code 43. You can try to mitigate this by booting Unraid in legacy mode, vfio-stubbing the GPU, turning off Hyper-V in the VM template and dumping your own vbios, and hopefully all is well.
      Based on your plan, I'm assuming you are doing a 2990WX, for which your core assignment isn't ideal. The 2990WX has 32 physical cores broken down into 4 chiplets (each chiplet has 2 CCX, each CCX has 4 cores); only 2 of the chiplets have PCIe and memory connections. So your main gaming VM should use exactly 8 cores (1 full chiplet) that connect directly to your GPU's PCIe slot for the lowest latency (an example pinning is sketched below). Anything else using the same chiplet while you game will introduce some variance to your frame rate, which can vary from unnoticeable to freaking annoying. The remaining chiplet with a PCIe connection needs 1 core reserved for Unraid (core 0 is almost always on chiplet 0, which will almost always be the one having the PCIe connection). So you are left with 7 cores with a PCIe connection to split between your streaming and video editing. I would suggest you give 3 cores (from a single CCX) to the video editing and the remaining 4 cores (1 full CCX) to the streaming. Alternatively, you can have 6 physical cores assigned to the streaming VM (evenly spread across 2 CCX) and the remaining 1 to the video editing VM. The latency is less noticeable with video editing than with streaming (here I'm assuming you are doing live streaming on Twitch / Youtube).
      The remaining chiplet cores (without PCIe and memory connections) can be split across VMs as you see fit, but for best performance (especially for the streaming VM), you don't want to split a chiplet across multiple VMs, and within the same chiplet, you would want to split your cores as evenly across the CCXs as possible (to even out the load).
      Need pictures once you are done with the custom loop. Would be a nice build.
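      For illustration only, a cputune sketch pinning the gaming VM to 1 full chiplet. The host core numbers (8-15) are made up - check your own topology (e.g. with lscpu or the Unraid CPU pinning page) to find which cores actually sit on the chiplet wired to your GPU slot, and add the SMT sibling threads too if you want them:
      <vcpu placement='static'>8</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='8'/>
        <vcpupin vcpu='1' cpuset='9'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='11'/>
        <vcpupin vcpu='4' cpuset='12'/>
        <vcpupin vcpu='5' cpuset='13'/>
        <vcpupin vcpu='6' cpuset='14'/>
        <vcpupin vcpu='7' cpuset='15'/>
      </cputune>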
  20. You are on the bleeding edge of tech, mate. Highly unlikely anyone on here is that fast. But just in general terms, I would go for whichever brand allows me to pick any arbitrary PCIe slot as the primary GPU (i.e. what Unraid boots with). TR4 (and Ryzen in general) IOMMU grouping has been generally good across the brands (and failing that, ACS Override works to a certain extent), so it is more important to try to avoid error code 43 (assuming you are passing through an Nvidia GPU).
  21. Not John, but it doesn't take John to notice it's not viable right off the bat. You need single-slot-width cards for the middle 2 GPUs, and I don't remember any single-slot 1080Ti being widely available (if at all, outside of China). The cards in your illustration are Zotac Minis, which are definitely dual-slot. (Not wanting to sound harsh, but if you don't know why dual-slot cards would not work in your config then you might want to watch a few more Youtube videos on PC building.) Also, you don't need a 1080Ti for streaming nor video editing. Unless there are separate people working for you on video editing and streaming, there's also no reason why you need separate VMs for video editing and streaming. I can see merit in having a separate gaming VM on Threadripper due to the NUMA nodes, but certainly not separate streaming / video editing VMs.
  22. If powering 3.5" HDD then I would use the MOLEX -> SATA since I don't want to overdraw current on a single SATA cable (which can happen if multiple HDD spins up simultaneously). If powering 2.5" SSD then it doesn't quite matter due to relatively low power draw of SSD but I would still use MOLEX -> SATA just to be safe. The quality of the cable, in contrast, is a bigger concern. Dollar-store stuff is highly not recommended.