Everything posted by testdasi

  1. There is an rclone docker, but configuring it directly via the plugin is easier.
  2. For (4), watch the SpaceInvaderOne video. You have 2 GPUs so it's easy: put the 710 in the primary slot and dump the 1060 vbios, then shut down, put the 1060 in the primary slot and dump the 710 vbios. Alternatively, go into your BIOS and set the 710 as primary, dump the 1060, then set the 1060 as primary and dump the 710 (one way to do the actual dump is sketched below). Also, just to double check: by "Check" you mean you have done ALL of the items you checked? i.e. they are not exclusive fixes - they are meant to be applied in combination, progressively.
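     For reference, a minimal sketch of dumping a vbios via the sysfs rom interface, for a card that is NOT currently driving the display. The PCI address and output path are assumptions - find yours with lspci and adjust:

         #!/bin/bash
         # assumed PCI address of the card to dump - check with: lspci | grep -i vga
         GPU=0000:02:00.0

         echo 1 > /sys/bus/pci/devices/$GPU/rom                # unlock the ROM for reading
         cat /sys/bus/pci/devices/$GPU/rom > /boot/vbios.rom   # dump it (output path is an assumption)
         echo 0 > /sys/bus/pci/devices/$GPU/rom                # lock it again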
  3. It's just one of the potential fixes you can try. Personally, I have found vbios is less useful without booting Unraid in Legacy mode. There's something about UEFI that prevents a GPU from resetting properly.
  4. Sounds like a clue? ... Copy failed - is rclone running? ... You need to unmount everything (and kill any rclone processes still running) before you can update the plugin - see the sketch below. I tend to do it after a restart.
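     A minimal sketch of that unmount-and-kill step, assuming the mount lives at /mnt/disks/rclone_mount (the path is an assumption - use your own mount point):

         # lazy-unmount the rclone fuse mount (path is an assumption)
         fusermount -uz /mnt/disks/rclone_mount

         # kill any rclone mount processes still hanging around
         pkill -f "rclone mount"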
  5. Janky solution: set up a VM, map the share to the VM, run an ftp client in the VM and copy to the mapped share. Advanced solution: install the rclone plugin. I believe rclone supports an ftp remote (caveat: I have never used the ftp remote myself). Then rclone mount and rclone sync will serve your purpose (see the sketch below).
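     A rough sketch of what that could look like once an ftp remote is set up. The remote name (myftp) and all paths are assumptions:

         # one-off interactive setup - pick the ftp backend when prompted
         rclone config

         # mount the remote so other apps can browse it
         rclone mount myftp:/some/dir /mnt/disks/ftp_mount --daemon

         # or one-way copy the remote's content into a share
         rclone sync myftp:/some/dir /mnt/user/downloads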
  6. Mounting to /mnt/cache won't work if the docker is mapped to /mnt/user. If both use /mnt/cache then it would work. In other words, the docker and rclone must be pointing to the exact same path because, as you said, streaming is done in RAM (see the example below).
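     To illustrate, if rclone mounts at /mnt/cache/rclone_media, the container's volume mapping must use that exact same path. The container name, image and paths here are assumptions:

         # map the rclone mount path itself, not the /mnt/user equivalent
         docker run -d --name plex \
           -v /mnt/cache/rclone_media:/media \
           plexinc/pms-docker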
  7. By HT core, you mean physical core (i.e. a pair of connected "cpu" in Unraid)? Assuming your core assignment is 0 + 1 = 1 physical core (e.g. 2+3, 4+5 etc. pairs - double check it on your Dashboard, it is not necessarily always the case, e.g. it may be 0+8), then you might try isolating 2-7 and 10-15 for your VM, which might actually improve your performance. This is due to the Ryzen design which essentially, in Intel lingo, "glues" together 2 CCX of 4 cores each (so 0-7 = CCX 0, 8-15 = CCX 1). Spreading the load evenly across CCX may actually improve performance. Based on my own test (albeit on Threadripper, but it's a similar design), 7 cores are just as fast as 6 cores on some workloads! Also, perhaps obvious: you need to restart after changing CPU isolation for it to take effect (one way to set it is sketched below).
     While you are at it, add this bit before </cputune> in the VM template:
         <emulatorpin cpuset='8,9'/>
     It uses 8+9 for the emulator, which (a) saves your "main" VM cores from doing emulator work and (b) avoids core 0, which is preferred by Unraid.
     Now if that does not improve things then try these:
     • Install the Tips and Tweaks plugin and switch your CPU governor to High Performance (instead of the default On Demand). This should ensure your cores run at full all-core boost consistently, which improves fps consistency.
     • Update to Unraid 6.8.0-rc3 and start a fresh new template with the Q35 4.1 machine type. Start a fresh template because the GUI can't handle the complicated switch from i440FX to Q35; Q35 because it helps with PCIe support; Unraid 6.8 because it has qemu 4.1 which, again, improves PCIe support.
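     A minimal sketch of the isolation step via the boot config, assuming the 2-7, 10-15 split discussed above. On Unraid this lives in /boot/syslinux/syslinux.cfg; edit the append line, then reboot:

         label Unraid OS
           menu default
           kernel /bzimage
           append isolcpus=2-7,10-15 initrd=/bzroot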
  8. @nuhll: oh wow, that's cool. I shall try that tonight. Perhaps you need to change your mount script's mount points to /mnt/cache/[something something]?
  9. vbios is never a "required" thing. It's a "fix" thing i.e. if you have problems passing through a GPU (regardless of brand), that is a fix that you can attempt. It is particularly relevant to passing through a primary GPU. In fact, my general recommendation is, where possible, dump your own vbios regardless. It can only help with stability.
  10. @gcoppin: This topic was already resurrected so you have made it into a zombie. Please raise a separate topic with your own details. Whatever was done in 2015 and 2018 is unlikely to have any relevance in 2019.
  11. I believe FireFly III docker is already available on Unraid Community Apps. I got it to work without any special skill required.
  12. Leave it at Auto. You are overthinking it.
  13. Don't purchase it yet. Start with the trial, make sure things work properly, THEN purchase. With PCIe pass-through (required for your question number (3)), there can never be a 100% guarantee that it will work unless someone has it working with the exact same config as yours (exact down to brand and model). Some tips to make your life easier:
     • Watch the SpaceInvaderOne guides on Youtube. They are more helpful than LTT proof-of-concepts.
     • Boot Unraid in legacy mode - when building your USB stick, disable UEFI to ensure that if it boots, it boots in legacy mode. This is particularly relevant since you are planning to pass through your primary GPU to a VM (i.e. avoiding the infamous Nvidia driver error 43).
     • If you have not bought the CPU + mobo yet, perhaps opt for something that has an integrated GPU + a mobo that allows you to use the integrated GPU as primary (a browse of the owner's manual pdf should tell you if that's possible via BIOS settings). Booting Unraid with the integrated GPU improves your chances of passing through a dedicated GPU to a VM (again, avoiding error 43).
     • You can also dump your own vbios and use it (watch the SpaceInvaderOne guides for details). This is easier to do if your config has an integrated GPU that Unraid can boot with (see my point above). Again, this is to avoid the infamous Nvidia driver error 43.
  14. Hope swapping the card works for you. If it doesn't, with cards that don't like being passed through, the following seem to help: (1) boot Unraid in legacy mode; (2) set the primary GPU as something other than the to-be-passed-through card in the motherboard BIOS; (3) use the Q35 machine type; (4) dump your own vbios and use it; (5) for Nvidia, disable Hyper-V. You have only done (3) and (5), so perhaps try the other 3 workarounds.
  15. You are using Q35 3.1 but I don't see this bit of code at the end of your xml, before </domain>:
         <qemu:commandline>
           <qemu:arg value='-global'/>
           <qemu:arg value='pcie-root-port.speed=8'/>
           <qemu:arg value='-global'/>
           <qemu:arg value='pcie-root-port.width=16'/>
         </qemu:commandline>
     That bit of code, coupled with the root port patch, makes your Q35 PCIe slot run at the right speed. Without it, you might have some unexpected issues with passing through devices. Perhaps try adding it before </domain>. Alternatively, and preferably, update to 6.8.0-rc3. It has qemu 4.1 which has better PCIe support. I no longer need that bit of code in my VM template and things seem to be running fine so far. (A quick way to check the negotiated link speed is below.)
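     To check what link speed a passed-through device actually negotiated, something like this works from inside the guest (the 01:00.0 address is an assumption - check yours with plain lspci):

         # LnkCap = what the link can do, LnkSta = what it is currently running at
         lspci -s 01:00.0 -vv | grep -E "LnkCap|LnkSta"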
  16. Each core has its own L1 / L2, so it scales pretty well. In contrast, L3 is shared between groups of cores. As you can see in your numa config, each of the 6-8 and 9-11 groups has its own L3 cache. So depending on how the cores are assigned (e.g. 6-9 is different from 6-7 + 9-10, despite both being 4 cores) AND the exact circumstances of the test run, your test results will differ (the commands below show how to inspect the topology). What issues do you have on your system that would lead you to think they're related to L3 performance?
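     A couple of commands to inspect the cache/core topology; the index3 entry is typically the L3 cache, but that's worth verifying on your own box:

         # overall cache sizes and NUMA layout
         lscpu

         # which cpus share cpu0's L3 (index3 is typically L3)
         cat /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list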
  17. When you click on Webgui, it should open another browser window. What shows up in the address bar? Also, as a side note, you shouldn't give any docker access to the /mnt path. That is like giving your house key to your mother-in-law. At the highest level, it should point to /mnt/user/[sharename].
  18. How did you count your used size? Attaching diagnostics may help, but it won't help if you have something quietly writing data to your drive. Also, btrfs or xfs on the cache? (If btrfs, the commands below can help pin down where the space went.)
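     A couple of commands that can help track down cache usage, assuming the cache is mounted at the usual /mnt/cache:

         # btrfs's own view of allocated vs. used space (plain df can mislead on btrfs)
         btrfs filesystem usage /mnt/cache

         # biggest top-level directories on the cache
         du -h -d 1 /mnt/cache | sort -h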
  19. I think I found a bug with the rclone-beta plugin. It is now looking for the config in the rclone plugin folder instead of the rclone-beta folder like it previously did. Sounds to me like a copy-paste bug. I switched to the rclone plugin and everything works; but then I don't do exotic stuff, just plain old empty mounts, so I reckon I have never needed the latest beta build anyway. (You can check which config file is being read - see below.)
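     For anyone wanting to confirm where their install reads its config from, rclone can report it directly:

         # print the config file path the rclone binary is actually using
         rclone config file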
  20. It depends. In my own experience, any VM that needs OVMF to boot should be on Q35, while i440fx works better for VMs booting with SeaBIOS. For Windows, I have found switching to the Q35 machine type resolves a lot of PCIe pass-through issues, especially the infamous error 43 for Nvidia GPUs. For MacOS, Q35 is a prerequisite.
  21. I posted this on another topic also about being stuck at the TianoCore screen. With cards that don't like being passed through, the following seem to help:
     • Boot Unraid in legacy mode - I purposely disabled UEFI when building my USB stick to ensure that if it boots, it 100% boots in legacy mode.
     • Set the primary GPU as something other than the to-be-passed-through card in the motherboard BIOS.
     • Use the Q35 machine type (I recommend starting a new template from scratch AND using Unraid 6.8, which saves you the trouble of needing the root port patch).
     • Dump your own vbios and use it.
  22. As a starting point, are you booting Unraid itself in legacy mode, i.e. NOT UEFI? With cards that don't like being passed through (I vaguely remember the RX 560 being one of them), the following seem to help:
     • Boot Unraid in legacy mode - I purposely disabled UEFI when building my USB stick to ensure that if it boots, it 100% boots in legacy mode.
     • Set the primary GPU as something other than the to-be-passed-through card in the motherboard BIOS.
     • Use the Q35 machine type (I recommend starting a new template from scratch AND using Unraid 6.8, which saves you the trouble of needing the root port patch).
     • Dump your own vbios and use it.
  23. vfio stubbing shouldn't cause your server to fail to boot unless you have stubbed something critical. Perhaps you have some other inherent issues. Anyway, start with the simplest "fix": are you booting Unraid in legacy mode (i.e. NOT UEFI)? That simple step solves a lot of PCIe pass-through issues, so I would suggest starting there (a quick way to check is below).
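     A quick way to tell which mode the current boot actually used - on Linux, the efi directory only exists in sysfs when the system booted via UEFI:

         # prints UEFI if booted via UEFI, legacy otherwise
         [ -d /sys/firmware/efi ] && echo "UEFI" || echo "legacy"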
  24. @jonp: by "select Q35 for Windows-based VMs", do you mean as part of the Unraid pre-built VM templates? As far as I know, the VM xml doesn't have any tag that says "expected OS" or anything like that. Picking machine type = Q35 is a generic qemu option, so it would take effort on the Unraid devs' part to disallow it. I sincerely hope you are not talking about disallowing Q35 as a blanket ban, because that would be a catastrophic mistake. Putting effort into banning something that causes no harm to the majority of users while possibly helping some (even if a niche group) is nuts. On a related note, I believe qemu 4.1 (Unraid 6.8.0-rc) no longer requires the patch. I removed the custom xml tags and my PCIe runs at full x16 speed as far as I can tell.
  25. Updates:
     • Updated to 6.8.0-rc3 from 6.7.2. I have not used an rc on my prod server for a long time now but decided to do it this time.
       • qemu 4.1 is a much-appreciated upgrade for better PCIe support. I no longer need the manually-added qemu:commandline bit of xml (aka the root port patch) for Q35, and my PCIe shows up as x16 now.
       • Support for Wireguard. I have no clue how to set it up properly, so I will wait for the Spaceinvaderone guide, but it's better to be on 6.8 and know / work around any teething issues before his guide is out. On the subject of teething issues, Spaceinvaderone reported some rather minor gripes with rc1.
     • My attempt to run the Kingston 128GB SSD into the ground has failed, just like my other attempts to run SSDs into the ground. This speaks volumes about the longevity and endurance of SSDs if used correctly, e.g. frequent trim, minimal long-term data (i.e. practical over-provisioning), etc.
     • Got a 4TB Samsung 860 EVO as my NAS drive (i.e. long-term data) because my 2TB 850 EVO has already filled up. In the process, I have also rejigged my assignments:
       • The 2TB 850 EVO is now my cache drive. This resolves a rather peculiar issue with Plex: when my Plex db was on the i750, media thumbnails would sometimes fail to load properly. I know it's not data corruption because if I refresh the page, things show up normally again (and I don't have the db corruption errors that others have reported with 6.7). I have done my testing and, long story short, I think the i750 has some funky latency that messes up the Plex / browser image load timeout.
       • The i750 is now my temp drive, i.e. heavy write. I store minimal long-term data on it to minimize write amplification.
       • My Crucial MX300 is now my intermediate drive, i.e. for data waiting to be processed. This will be another project to run into the ground - we'll see how "successful" I am (already 44TB written and only 1 bad block, 9191 spare blocks to go!).
       • My Kingston 128GB now serves as mount points for rclone and associated logs. Having mount points on the array occasionally caused my HDDs to spin up unnecessarily.
     • I retired the 300GB Toshiba 7200rpm 2.5" HDD for the nth time. Given its 100MB/s speed, I really don't miss it that much.
     • My important data is now 1-2-4 (1 piece of data, 2 locations, 4 copies - primary, online, offline, offsite).