testdasi

Everything posted by testdasi

  1. Unfortunately, your only choices are to get a low-end GPU for it to boot with, or get a new motherboard (with no guarantee that it would work headless). This is what I have been saying for a while: the ability to boot headless is hardware dependent and cannot be assumed.
  2. ... and power. Some GPUs won't start unless the appropriate amount of power is supplied and/or all the plugs are plugged in.
  3. Your motherboard only supports up to 64GB of RAM. I think only the X570 chipset supports 128GB. If you run your RAM at 2133MHz or 1866MHz you may get away with 128GB on X370, but it's unlikely.
  4. Show me a screenshot of your Syncthing docker setup first. Your current screenshot is of what is inside the docker, which looks pretty dang similar to unRAID but isn't.
  5. Look in the upper right corner and you will see "Basic View" and a toggle. Click the toggle to show Advanced View and you will be able to change the path where the isos (including the downloaded virtio drivers) are saved. The iso "share" is just a folder. If it's top level then it's a share; if it's under another share then you need to follow the above to change the path. If you are not too sure what to do (e.g. not sure what path to change it to) then just keep it as a share. There is no harm leaving it as-is if it's not broken. PS: the Unassigned Devices plugin allows you to mount an iso file as a network share.
  6. Thermaltake PSUs are ok. Actually all the major brands are more or less similar; they all come down to cost, efficiency, wattage and physical size (and RGB!). You can read PSU reviews (covering only the stuff that matters, i.e. no RGB-ness assessment) on jonnyguru. A few pointers on your build: the CPU comes with a cooler so there's no need to buy a 3rd-party cooler just yet. Just run it with the stock cooler and you may find it more than adequate, i.e. save $20. If you do want a 3rd-party cooler, I would recommend throwing in another 10 bucks and getting the Cooler Master Hyper 212; it has a cult following as a budget cooler. You can also search ebay for a used case to cut some cost, but be mindful of missing screws.
  7. Just as a NAS for 5TB of data, a good budget solution is a commodity RAID box run in RAID-1 (i.e. mirror) + 2x 8TB (or 10TB) HDDs. That way you have decent free space, a simple setup and data protection. If you want overkill then sure, you can do Unraid. Get a Fractal Design Node 304 + a mini-ITX motherboard + some HDDs + the usual PC parts. Best is to go on ebay and look for used parts. If you need some inspiration, search for user Hoopster on here and see the builds in his signature. Something similar to his backup server should be cost effective.
  8. Is it really a USB device? If so, either attach it to the VM the usual way, or pass through a USB controller and plug the device into it. Your X570 mobo should have an onboard USB controller that can be stubbed and passed through to a VM.
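     If you want to go the stubbing route, it generally looks like the below. The controller ID shown is only an example; use whatever your own lspci output gives you, and note that newer Unraid versions also offer a GUI way to bind devices to vfio-pci.

         # Find the onboard USB controller and its vendor:device ID
         lspci -nn | grep -i usb
         #   e.g. 0b:00.3 USB controller [0c03]: AMD ... [1022:149c]

         # Stub it at boot by adding the ID to the append line in /boot/syslinux/syslinux.cfg,
         # then reboot and attach the controller to the VM as a PCIe device:
         #   append vfio-pci.ids=1022:149c initrd=/bzroot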
  9. Update in case someone stumbles on this topic. The dockers will error out. The fix is to go to Settings -> CPU Pinning and re-pin the cores. If your pinned cores are outside of the new range (e.g. you pinned core 15 to the docker but the server now only has 12 cores), just clicking Apply will still update the docker even if you don't make any changes.
  10. Do you have a backup of your USB stick? If so, restoring is as simple as copying the backup back onto a new stick (or the old stick if it still works - inability to read can mean a corrupted partition table rather than a truly dead stick - but I would not trust it moving forward). Then follow the procedure for USB stick replacement to get a new license. If you don't, then, well, you will need to manually set things back up from scratch. Your data should still be safe (unless you make silly mistakes like assigning a data disk to parity or clicking "format" without thinking, etc.). And remember to keep backups of your stick moving forward. Last but not least, the upgrade itself didn't brick your stick; it simply can't. The stick was already on its last legs. Think of it like you have a car, you take it out for a drive to your in-laws and the engine dies in the process. Do you blame the in-laws *cough* blame the drive to the in-laws *cough* for it? I would, but I know it's really not the cause.
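     In case it helps, the restore from a Linux machine looks roughly like the below. The device name and paths are only examples; double-check the official USB replacement guide for the exact steps.

         # The replacement stick should be FAT32-formatted with the volume label UNRAID
         mount /dev/sdX1 /mnt/usb                       # sdX1 is an example device name
         unzip /path/to/flash-backup.zip -d /mnt/usb    # copy the backup contents back onto the stick
         # Then run the make_bootable script that ships with Unraid (make_bootable_linux as root,
         # or make_bootable.bat as Administrator on Windows) and transfer the licence to the new GUID.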
  11. It has been a while since I last had a 380, but I remember something about newer AMD drivers not being cooperative with passthrough. You might have to install an old version of the AMD driver first through VNC, then pass through the card and keep your fingers crossed. Unfortunately I don't remember which version any more, but maybe start with something from around 2016. If you dumped your own vbios then use it; it can only help.
  12. Maybe I lucked into it, but my UltraFit has been working well for years across multiple iterations of my home server. Do you use a USB 3.0 port? I only plug the stick into USB 2.0 ports. My hypothesis is that the higher speed of USB 3.0 generates too much heat for these micro sticks. Mine is also the 16GB variety. I think high data density (particularly around the partition tables) makes a stick easier to corrupt, especially under heat.
  13. So the worms are crawling out of the can. I'm writing this post with Unraid running in VMWare Workstation in the background, serving Plex from the cloud. Things are still in the process of getting sorted, but here is what I've found so far:
     - I wasted half a day trying and tweaking VirtualBox first but gave up. Storage performance was terrible (50MB/s dd from within the server). Network performance was terrible (10-20MB/s!). And you cannot minimise the VM window to the system tray; this feature was requested like a decade ago and is still not implemented.
     - So next comes VMWare Workstation. I qemu-img converted the cache drive to a vmdk image (see the command below). Dockers are all running fine. Storage performance is great (can reach GB/s on NVMe). I turned on nested virtualisation and "HVM" is now enabled, so theoretically I can run a nested VM, but I find no need to. My main Linux "Macbuntu" VM is also running on VMWare.
     - Network performance is peculiar (write is about 80MB/s, so gigabit-ish, but read is like 500MB/s).
     - Annoyance: Unraid partition alignment sort of messes up Windows' ability to take drives offline, so I end up with a catch-22. A drive formatted with Unraid can't be attached to the VM (because it can't be taken offline properly; to be exact, the partition still mounts), while a drive formatted with Windows can be attached to the VM but doesn't work with Unraid (once reformatted). I will probably have to do vmdk for my slow storage, but we'll see how it goes.
     - My 905p somehow decided to drop latency a further 5us, down to 25us.
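     For reference, the conversion is a one-liner; the source (raw image or block device) and output path below are just examples, adjust to your own setup.

         # Convert the raw cache drive / image to a VMDK that VMWare Workstation can use
         qemu-img convert -p -f raw -O vmdk /mnt/disks/cache/vdisk1.img cache.vmdk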
  14. I can recommend the Designare EX. The key benefit is that you can pick the middle PCIe slot (the one that is x16 physical running at PCIe 2.0 x4) as the initial display, so you can put a single-slot low-end GPU there for Unraid to boot with. That helps tremendously with passthrough. Shameless plug: have a look at my build below:
  15. Are you talking about the folder where you store your .iso files? Or the iso file mounted by Unassigned Devices? If the former then no, it doesn't have to be a share. There is no option to change it; you just have to move the content of the current iso share to Apps/ISOs and then delete the now-empty iso share.
  16. More updates: 4k random Q1T1 average latency in microseconds (read - write - mixed)
     - 905p: bare metal 26.54 - 31.03 - 27.20; VM 59.99 - 64.73 - 63.13; NUMA-aligned VM 57.92 - 62.35 - 60.40
     - 970 Evo (write cache enabled): bare metal 80.19 - 29.23 - 74.88; VM 118.66 - 61.50 - 109.67
     - PM983: bare metal 111.90 - 36.39 - 91.22; VM 133.52 - 60.47 - 114.69
     - 970 Evo (write cache disabled): bare metal 80.63 - 1986.41 - 716.47; VM 117.98 - 1940.51 - 685.75
     Comments:
     - We all know latency isn't a strength of the Threadripper platform; now we have an idea of the extent. The PCIe access latency floor for Threadripper is about 30us bare metal and 60us in a VM. This can be seen in the write figures: writes that hit a low-latency medium (e.g. Optane or a DRAM write cache) are bottlenecked by this latency floor.
     - None of the tuning considered in the previous posts made any difference at all. In fact, turning on FIFO even slows things down!
     - The only tweak that makes a difference to the 905p performance is to align the cores CDM uses to the same NUMA node that the 905p is connected to. I did this using Process Lasso. What isn't shown in the data is the consistency of the NUMA-aligned results: I ran the test multiple times and the results are within 0.03us of each other, while non-NUMA-aligned results can vary by about 3us!
     - I can't make up my mind how I should view the bare metal vs VM difference for the 905p. The overhead is "only" 30us, but that is still a 2x performance ratio. At higher queue depths (starting from Q4T1), the BM:VM performance ratio converges to about 1.5x (or 1.3x for the NUMA-aligned VM).
     Doing all these benchmarks has sort of opened a can of worms. I'm now considering running Windows 10 bare metal and Unraid as a VM under a type 2 hypervisor (e.g. VMWare Workstation). I should then be able to use Process Lasso to artificially "pin" the no-RAM NUMA node's cores to the Unraid VM, which I would probably use mainly for slow storage and dockers. Then perhaps I'll eventually move stuff from Unraid dockers to Windows dockers, leaving Unraid purely as a NAS solution. A rather large can of worms indeed!
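     For anyone wanting to replicate the NUMA alignment on the Linux side (I used Process Lasso inside the Windows VM), the idea looks something like the below; the device name and node number are examples only.

         # Which NUMA node is the NVMe attached to? (-1 means no affinity reported)
         cat /sys/class/nvme/nvme0/device/numa_node

         # Run a read-only 4k Q1T1-style benchmark pinned to that node's CPUs and memory (node 0 here)
         numactl --cpunodebind=0 --membind=0 \
             fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
                 --bs=4k --iodepth=1 --direct=1 --runtime=30 --readonly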
  17. Do I need to manually change all the docker pins to core 1 first before I can migrate to a new server, if the new server has fewer cores than my current one? What will happen if I just migrate to a new server that, let's say, has 12 cores while my docker has core 15 pinned to it?
  18. That would have been right 5-10 years ago, but it's not true any more. Even back when RDP didn't work with games, GPU hardware acceleration still worked. You can now play games via RDP on Windows 10 (it was also reported to work with Windows 8 but I don't have any Windows 8 machine to test). A few caveats:
     - Not all games will work, and some games work but are buggy, e.g. audio issues.
     - Latency is ok, but don't expect to play twitchy FPS.
     - Graphics performance is more like watching a movie (think 24fps), so again no twitchy stuff.
     - Don't expect to use Teamviewer for free. Their free-for-personal-use policy is a scam because they can arbitrarily say "commercial use detected" and force you to pay for it. Just google "Teamviewer commercial use detected" and find the comically long complaint topic about this issue, which Teamviewer can't even be bothered to respond to.
     Even if, let's say, RDP doesn't work, there are so many free alternatives out there such as NoMachine, Parsec, even VNC, just to name some. But note that remote desktop is still remote desktop. If you expect it to be just like sitting next to the workstation then you will be disappointed.
  19. More goofu over lunch about the 905p in a VM:
     - marktech.us reported something similar about 1.5 years ago (Source 1). Performance in a VM (albeit Hyper-V and not KVM/qemu) drops by about 3.5x, and 80% of bare metal can be achieved with Q32T1. This points towards the 905p just having higher latency in a VM by default, e.g. vfio latency again?
     - Reddit suggests running qemu with the fifo scheduler to reduce latency (15-50us, albeit with Intel) (Source 2). That reminded me of a recent topic on here in which a user used vcpusched in the xml to reduce latency (20us to 200us, again albeit with Intel) (Source 3). The parameters for vcpusched can be found on the official libvirt wiki (Source 4). The user reported rr to seem to work better than fifo. I will probably avoid the 2 commands in that post if possible; in particular, kernel.sched_rt_runtime_us=-1 carries the risk of a system lock-up (Source 8).
     - That brought me to another topic on here which also mentioned fifo optimisation, specifically for Threadripper (Source 5). This method changes the real-time scheduler of the qemu process using the chrt command; details on chrt parameters are in Source 6, and which priority to select (hint: 99 is highest) in Source 7. The user, however, reported that this fix is not effective in reducing latency, unlike the vcpusched topic above.
     UPDATED TO DO LIST
     - Retest with the Hyper-V feature uninstalled in the Windows guest (not holding out high hopes for this, but it's the easiest and quickest one to retest)
     - Redo and retest the xml using vcpusched and iothreadpin, and remove EPYC emulation
     - Retest with vapic off and hyper-v on
     - Retest hyper-v on vs off in the xml
     - Just be happy that a bottlenecked 905p is still 2x faster than my fastest NAND NVMe.
     Source 1: https://marktech.us/tag/optane/
     Source 2: https://www.reddit.com/r/VFIO/comments/6ze435/high_dpc_latency_in_guest/
     Source 3: https://forums.unraid.net/topic/87948-windows-10-kvm-high-dpc-latency/?tab=comments#comment-817403
     Source 4: https://libvirt.org/formatdomain.html#elementsCPUTuning
     Source 5: https://forums.unraid.net/topic/84095-terrible-perfomance-on-threadripper/?tab=comments#comment-778928
     Source 6: https://linux-tips.com/t/how-to-use-chrt-command/268
     Source 7: https://stackoverflow.com/questions/8887531/which-real-time-priority-is-the-highest-priority-in-linux
     Source 8: https://access.redhat.com/solutions/1604133
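     For anyone wanting to try the chrt route from Sources 5/6, the general shape is below; the VM name and priority value are just examples, not a recommendation.

         # Find the qemu process of the VM (the VM name pattern is an example)
         pid=$(pgrep -f 'qemu.*Windows10' | head -n 1)

         # Switch it to the round-robin real-time scheduler at a chosen priority (1-99, 99 = highest)
         chrt -r -p 50 "$pid"

         # Verify the change
         chrt -p "$pid"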
  20. If you are comfortable building a PC then you can consider something similar to the builds in Hoopster's signature. Used parts shouldn't be too expensive.
  21. I don't think any of the answers relates to ready-to-use systems. We were talking about building a PC and then installing Unraid as the OS.
  22. A few pointers:
     - An SSD array (especially an SSD array with parity) is not officially supported. While it mostly works fine, there are issues / limitations: there is no trim support, so you will have to rely on passive over-provisioning to maintain write speed (i.e. leave empty space), and some SSDs' garbage collection / wear levelling can cause parity errors.
     - For best performance, you want your SSDs in the cache pool or mounted as unassigned devices. For an NVMe SSD, the best performance is achieved by passing it through to the VM as a PCIe device (see the example below). Of course, the disadvantage is that only a single VM can use that device at any one time (and given it's storage, you probably want a single VM to use it exclusively).
     - In your case, I would suggest: get an additional cheap HDD (or SATA SSD) to put in the array; use 1 of the 2 NVMe drives in the cache pool (e.g. for the vdisks of the various VMs you want to run, the docker image, docker appdata, etc.); and pass through the other NVMe to your daily driver for best performance, or alternatively mount it as an unassigned device and put more vdisks / storage etc. on it.
     - Expect issues with passing through AMD GPUs due to reset issues. To date, they are still not fixed. Generally speaking, if you have 2 GPUs, passing through the Nvidia one is less troublesome (with no guarantee).
     - VM networking is 10Gbe virtio ethernet (bridged to the host ethernet). I can't speak for your 10Gbe network plan since I don't do it. Personally, though, I would not rely on a VM for critical infrastructure such as a network firewall. There are too many ways for things to go wrong, e.g. I have seen recent issues with a FreeBSD-based VM and a passed-through NIC.
     - You don't need ECC RAM. It's just additional cost for practically no impact. Unraid isn't FreeNAS ZFS.
     - For what you are planning to do, Unraid may not be the best fit, to be honest. Something like Proxmox or VMWare ESXi may be a better fit. Even Ubuntu + KVM would probably also work.
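     As a quick illustration of the NVMe passthrough point, identifying the device to pass through generally goes like this; the PCI address and IDs shown are examples only.

         # List NVMe controllers with their PCI addresses and vendor:device IDs
         lspci -nn | grep -i 'non-volatile'
         #   e.g. 01:00.0 Non-Volatile memory controller [0108]: Samsung ... [144d:a808]

         # Check which IOMMU group the controller sits in (ideally isolated, otherwise stub it)
         ls /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/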
  23. It's very easy. Back up your current xml and save it somewhere safe. Edit the VM in the GUI, set the graphics to VNC, and double-check that you are not passing through any other PCIe devices. Save and done. To restore, open your template in xml mode and copy-paste your backup over wholesale.
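     If you prefer doing the backup from the command line instead of copying from the GUI's xml view, something like this works; the VM name and destination path are examples.

         # Dump the current VM definition to a file
         virsh dumpxml "Windows 10" > /boot/backup/windows10.xml

         # To restore later, paste the file's contents back in via the xml view, or redefine directly:
         virsh define /boot/backup/windows10.xml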
  24. You need to enable the syslog server (Settings -> Syslog Server) and mirror your syslog. The next time it hangs and you reboot, you can send the full mirrored syslog. That will include events prior to, and likely related to, the issue. Your current post-reboot syslog is unlikely to help much.
  25. Important updates re NVMe storage devices in a Threadripper Windows VM. I ran CDM 7.0.0g benchmarks for all my VM NVMe drives.
     - I found the 905p latency for 4k Q1T1 to be about 60us (microseconds). The number is more or less the same for read, write and mixed (70-30) load. That is great because NAND SSD performance tends to dip quite a bit with mixed load, which tends to happen more frequently in real-life workloads. This is about half that of my best-performing NVMe (the 970 Evo), i.e. twice as fast.
     - While doing this test, I also discovered how the 970 Evo kinda cheats the benchmarks: write caching is allowed and turned on! With write caching disabled, write latency went from about 60us to a whopping 2000us (2ms)! With write caching enabled, write latency is about 60us, while read is about 120us and mixed about 115us. The 905p and PM983 do not allow enabling write caching at all. Of course I'll still enable write caching for the 970 Evo; cheating or not, it does positively impact performance.
     - Those who read the 905p spec sheet will immediately notice that 60us is more than 5x the headline latency of 11us. I certainly don't expect headline performance in real life, but my estimate says something closer to 16-17us is reasonable. That means my test is still 3.5x or so higher than the expected latency. This is bugging me quite a bit.
     My goofu gave some clues:
     - Just Threadripper being Threadripper, there's a latency floor of about 40us - 60us due to vfio latency (Source 1).
     - Disabling vapic may help, but it potentially needs kernel 4.7 to support AMD AVIC (Source 2). I can test removing the vapic hyperv tag but am kinda reluctant to downgrade to 6.8.3-rc7 for the 5.x kernel. Need the Soon™ 6.9.0-rc1 😅
     - Removing Hyper-V from Windows may help with latency (Source 3). This, however, runs counter to my previous tests showing that Hyper-V helps with storage performance. Or perhaps it's about uninstalling the Hyper-V feature from the Windows VM (and not a host xml parameter).
     - Installing the Hyper-V role (feature) on Windows (Server) 2016 increases latency (lowers IOPS) (Source 4). This is the only source that relates directly to the 905p + Threadripper (albeit a 1950X), and the change in performance surprisingly (or not) matches my case.
     - I'm still using EPYC emulation, so I wonder if perhaps that plays a role.
     TO DO LIST
     - Retest with the Hyper-V feature uninstalled in the Windows guest
     - Retest hyper-v on vs off in the xml
     - Retest with vapic off
     - Retest without EPYC emulation
     - Just be happy that a bottlenecked 905p is still 2x faster than my fastest NAND NVMe.
     Source 1: https://www.reddit.com/r/VFIO/comments/b5qyza/any_tips_on_reducing_vfiopci_latency/
     Source 2: https://www.redhat.com/archives/vfio-users/2017-March/msg00005.html
     Source 3: https://superuser.com/questions/1368076/severe-input-latency-lag-on-threadripper-2990wx
     Source 4: https://social.technet.microsoft.com/Forums/en-US/13ff0892-38e2-4505-b923-c89caeafaaf8/hyperv-performance-hit-on-intel-optane?forum=winserverhyperv
     Useful tool: convert IOPS to MB/s: https://wintelguy.com/iops-mbs-gbday-calc.pl (note: IOPS is just 1/latency in seconds)
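     As a quick worked example of that note, using the 60us figure from above:

         # IOPS ~= 1 / latency (in seconds); throughput ~= IOPS x block size
         # 60us at 4k Q1T1  ->  1 / 0.000060 ~= 16,667 IOPS  ->  16,667 x 4096 bytes ~= 68 MB/s
         awk 'BEGIN { lat_us = 60; iops = 1e6 / lat_us; printf "%.0f IOPS, %.1f MB/s\n", iops, iops * 4096 / 1e6 }'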