Rhynri

Everything posted by Rhynri

  1. GeForce GPU Passthrough for Windows Virtual Machine (Beta) - I was looking up something unrelated and stumbled upon this Nvidia link. This could potentially mean an end to Code 43 issues, although the text at the bottom makes me wonder whether you can pass through a primary card under this regime. Worth a test for a brave soul; I use my VM as a work machine and can't afford to brick it right this second, but I will test it once I have time if someone else hasn't by then.
  2. Same here! We meet again, @testdasi. It's almost like we use unraid in similar manners.
  3. I recently switched from having disk images to passing through NVMe controllers. This can drag your VM across nodes if the drive is on a different one from the rest of the VM hardware. I had a well-behaved VM (memory-wise) that now apportions a bit of memory across both nodes.
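     A quick way to see whether a running VM's memory has actually been split across nodes is numastat from the numactl package (a minimal sketch; the VM name in the pgrep pattern is hypothetical - substitute your own):

        # find the QEMU process for the VM (VM name is just an example)
        pid=$(pgrep -f 'qemu.*Windows10' | head -n1)

        # per-node memory breakdown for that process
        numastat -p "$pid"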
  4. @jonp - Am I correct in thinking that we have this in 6.8.0-rc3? Edit: Actually it looks like a lot of these "unscheduled" things are already present. Maybe time for some pruning?
  5. Must have just been a fluke on my part, I'll delete this.
  6. @SpaceInvaderOne - What is that screen you are using for your server name and YouTube view count? It's super cool and I'd love to have one to display various home-automation data like power consumption and solar production, plus real-time data from my WeatherFlow station.
  7. Can confirm. I use a 2010 MBA as my media acquisition device and get 17-18 MB/sec over SMB to my server, which is great considering how old that wifi chip is.
  8. +1 for cloning + history for XML. Reverting states is a pain. @glennv - as a side note, the GUI editor is getting better at not blowing away custom changes. I can't remember the critical OSX ones, so I can't tell you whether it's blowing them away or not.
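     Until something like that exists, one workaround is snapshotting the domain XML from the CLI before editing it (a minimal sketch; the VM name and backup location are just examples):

        # dump the current libvirt XML to the flash drive before touching it in the GUI
        mkdir -p /boot/vm-xml-backups
        virsh dumpxml Windows10 > /boot/vm-xml-backups/Windows10-$(date +%Y%m%d-%H%M%S).xml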
  9. Not sure if this got added since I last did a swap; if so, please disregard. I have my drives in external USB docks. I used to have them all in one stacked dock (all four together), but then I had to spin up the whole thing just to access one drive. When I moved them I had to do it one at a time, because Unraid saw them as new drives (with the different controllers), despite the fact that it could easily see the serial number under drive info. Can we have Unraid suggest drives that it's seen in the array, in the correct positions, via serial numbers? I'm intending to move them all into the case, but if this hasn't changed I'll need to do it piecemeal again.
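     For reference, the serials Unraid shows under drive info are visible from the shell too, independent of which controller the disk is attached to (a small sketch; /dev/sdX is a placeholder):

        # list block devices with the serial numbers udev reports
        lsblk -o NAME,SIZE,SERIAL,MODEL

        # or query a single device directly (replace sdX with the actual device)
        udevadm info --query=property --name=/dev/sdX | grep ID_SERIAL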
  10. Do you still need to have the extra root hub verbiage in the XML?
  11. What made you decide to start making videos for the community?
  12. Yeah, I think I bricked my cache data trying to add the drive back in the wrong way too. >.<
  13. Hello! I'd like to submit a feature request for a setting that prevents the array from starting if there is an issue with the cache drive/array.

      I recently noticed that my motherboard was missing a Molex power plug, so I shut down the system and popped the plug in. Somewhere along the way I bumped a connector on my U.2-mounted NVMe drive, loosening it just enough to take it offline. Upon starting Unraid, the array started as normal, but obviously the cache array was offline. My cache array is a BTRFS software RAID:

          Data, RAID0: total=1.11TiB, used=1.11TiB
          System, RAID1: total=32.00MiB, used=96.00KiB
          Metadata, RAID1: total=2.00GiB, used=1.21GiB

      Because the array started without the cache intact, upon reboot I was greeted with the attached screenshot on my dashboard. That drive activity is a BTRFS device delete... on the very cache array that the running VM I took this screenshot on is working from, because everything automatically started as normal and then proceeded to do this. That is not only unwanted but wasteful, and it shortens the lifespan of the drives.

      So I'd like to respectfully submit a request for a setting that prevents the array from starting if there is any disk error at all. If this is already a thing and I just couldn't find it, I apologize for wasting your time. Now I'm off to re-add that drive to my cache.
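      For anyone who wants a stopgap until such a setting exists, here is a rough sketch of a check that could run (e.g. from a user script) before the array is started; btrfs prints "*** Some devices missing" when a pool member is absent:

          # scan all btrfs filesystems and flag any with absent members
          if btrfs filesystem show 2>&1 | grep -q "Some devices missing"; then
              echo "WARNING: a btrfs pool is missing devices - do not start the array" >&2
              exit 1
          fi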
  14. Unraid has been an absolute lifesaver when it comes to managing my home tech infrastructure. I’ve consolidated so much into one system it’s not even funny. And the support you guys give to your users is unreal.
  15. @binhex - Found a solution for that scanning issue I had to manually lock Plex back to 1.14 for (it was a while back). Edit: The problem showed up in the logs only as:

          Jun 03, 2019 14:55:55.967 [0x151e50971740] WARN - Scanning the location /media/[Library Name] did not complete
          Jun 03, 2019 14:55:55.967 [0x151e50971740] DEBUG - Since it was an incomplete scan, we are not going to whack missing media.

      One of my scanners (Hama) was silently failing on certain files. I had to put a bunch of debug statements into the .py files to sort it out, but once I did, I realized that the latest version from GitHub solved it. So if you encounter someone with scanning issues, have them refresh all their scanners/plugins. I'll try to keep an eye out myself.
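      If anyone wants to check for the same symptom, something like this against the container's Plex logs should surface it (the appdata path here is an assumption about your own container mapping - adjust to match yours):

          # look for aborted library scans in the Plex Media Server log
          grep -i "did not complete" \
            "/mnt/user/appdata/binhex-plexpass/Plex Media Server/Logs/Plex Media Server.log"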
  16. If you’ve manually specified a version other than latest.
  17. Ouch. Yeah, I'm very happy with your docker image and knowing we can roll back easily is just icing on the cake.
  18. First, thank you for your response. Second, I appreciate the education; I wasn't aware Plex Pass was a form of beta, but that makes sense now that you've said it. I'll see what I can do through official channels, but thank you again for your time and for providing this great container in the first place.
  19. The latest Plex container would not scan for, nor detect changes to, my library items. Manual scans would immediately terminate, and manual scanning was not required prior to the latest un-tagged versions. Rolling back to 1.14.1.5488-1-01 immediately rectified the problem and found the library items I'd added since updating to the 'latest' version. I'm available for debugging purposes if you are interested, @binhex. Judging by responses on the official Plex forums this may be an issue in the official release, but most of the threads I'm finding reference the Mac OSX version. Edit: Your container has been an excellent one for many moons for me, though. Just wanted to give my praise as well.
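      For anyone wanting to do the same rollback, the gist is pointing the container at the explicit tag instead of latest (a sketch; the binhex/arch-plexpass image name is an assumption - use whichever binhex Plex image you actually run):

          # pull the pinned release instead of :latest
          docker pull binhex/arch-plexpass:1.14.1.5488-1-01

          # then set the Repository field in the Unraid template to the same string:
          #   binhex/arch-plexpass:1.14.1.5488-1-01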
  20. Awesome video. I'd like to note that in "independent research" I got hwloc/lstopo included with the GUI boot in Unraid 6.6.1, so that's another option requiring about the same number of reboots as the script method, i.e. reboot into the GUI, take a snapshot, reboot back to CLI. Of course, if you run the GUI all the time, this is just a bonus for you. Also, here is a labeled version of the Asus x399 ZE board in NUMA mode. Enjoy, and thanks @SpaceInvaderOne! (Note: this is with all M.2 slots and the U.2 4x/PCIe 4x split enabled with installed media. Slot numbers count full-length slots in order of physical closeness to the CPU socket, so top down for most installs.)
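      For anyone who wants to grab the same kind of snapshot from the GUI boot, something like this works from the terminal (the output path on the flash drive is just a suggestion):

          # render the NUMA/PCIe topology to an image on the flash drive
          # (lstopo picks the output format from the file extension)
          lstopo /boot/topology.png

          # or, if the text-only binary is bundled, dump it straight to the console
          lstopo-no-graphics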
  21. Thank you very much for this. I completely understand if it's only available in GUI-boot. Just gives me an excuse to go see the GUI! Hopefully other people find it useful as well.
  22. I wrote a rather in-depth reply, then accidentally deleted it, and there is no undelete. Suffice to say, moving the VM to the other NUMA node reduced the incidence of the problem and improved the rendering performance of the VM in question. It's still not gone, but I think a lot of the remaining NUMA misses are related to Unraid caching things, which is hardly a priority operation:

          numastat
                                     node0           node1
          numa_hit              2773556844      1684914320
          numa_miss                6233397       193845232
          numa_foreign           193845232         6233397
          interleave_hit             84430           84643
          local_node            2773481539      1684881326
          other_node               6308702       193878226

      Starting from a clean boot and looking at numastat when booting the two important VMs yields very few numa_misses relative to the previous configuration. This is after 8 days of uptime.

      @limetech - If you could please include lstopo in a future release, I'd greatly appreciate it. I linked a Slackware build for hwloc in a previous post in this thread if that helps. There are a few BIOS settings relating to IOMMU allocation in relation to the CCXs on Threadripper, and I'd like to do some A/B testing with lstopo to see what, if any, difference they make. As I mentioned in that reply, it would also potentially be a useful addition to the System Devices page. Please and thank you for your time and effort in making Unraid OS awesome.
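      In case it helps anyone trying the same fix, a guest's memory can be pinned to a node without hand-editing the XML (a sketch; the domain name is hypothetical, and --config only takes effect the next time the VM starts):

          # show the current memory placement policy for the guest
          virsh numatune Windows10

          # bind its memory allocations to NUMA node 1 on the next start
          virsh numatune Windows10 --nodeset 1 --config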
  23. It looks like it's trying to work. It will slow down the startup significantly and cause the NUMA misses to skyrocket. I've since discovered that only one of my VMs behaves this way, and I'm wondering if I can move that one to the other node it keeps trying to allocate memory on and see if that fixes the issue. Does anyone know if it matters which cores are isolated? Say, if I want to move my isolated cores to the beginning (0-11 physical) instead of at the end (4-15 physical), does Unraid care at all?
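      For reference, core isolation is set with the isolcpus kernel parameter on the append line in syslinux.cfg (a sketch; the CPU numbers and hyperthread siblings here are placeholders - match them to what lscpu -e reports for your own system):

          # /boot/syslinux/syslinux.cfg, under the Unraid OS boot entry
          # example: isolate physical cores 4-15 plus their HT siblings 20-31
          append isolcpus=4-15,20-31 initrd=/bzroot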
  24. I've been looking into this, and I think it may have something to do with which NUMA node the GPU is on. I was able to force correct NUMA allocations by changing the memory size of my node0 VM to neatly fill the available memory on that node, then booting the remaining two, but that results in a super lopsided memory allocation (28, 16, 8), and it's a very manual process. I'm going to ask around the VFIO community to see if there is anything I've been overlooking.

      I've been trying to install hwloc (slackbuild link) into Unraid so I can have access to the very useful lstopo, which would let me know which node(s) my PCIe devices are on. I keep running into compilation issues, however, so I'm going to keep working on that. The lstopo output as a standalone would be something very useful to have on the Tools page, as it gives you a very good idea of which devices are nested for pass-through... it's arguably as useful as anything on the [Tools] > [System Devices] page in terms of pass-through usage. I've also attached an image of what the lstopo GUI output looks like.

      Example (not my system):

          # lstopo
          Machine (256GB)
            NUMANode L#0 (P#0 128GB)
              Socket L#0 + L3 L#0 (20MB)
                L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
                L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#2)
                L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#4)
                L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#6)
                L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#8)
                L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#10)
                L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#12)
                L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#14)
              HostBridge L#0
                PCIBridge
                  PCI 1000:005d
                    Block L#0 "sda"
                PCIBridge
                  PCI 14e4:16a1
                    Net L#1 "eth0"
                  PCI 14e4:16a1
                    Net L#2 "eth1"
                  PCI 14e4:16a1
                    Net L#3 "eth2"
                  PCI 14e4:16a1
                    Net L#4 "eth3"
                PCI 8086:8d62
                PCIBridge
                  PCIBridge
                    PCIBridge
                      PCIBridge
                        PCI 102b:0534
                PCI 8086:8d02
                  Block L#5 "sr0"
            NUMANode L#1 (P#1 128GB)
              Socket L#1 + L3 L#1 (20MB)
                L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#1)
                L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#3)
                L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#5)
                L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#7)
                L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#9)
                L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#11)
                L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#13)
                L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15)
              HostBridge L#7
                PCIBridge
                  PCI 15b3:1003
                    Net L#6 "eth4"
                    Net L#7 "eth5"
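      Until lstopo is available, a cruder way to answer the "which node is this device on" question is sysfs (a sketch; the PCI address is a placeholder - take the real one from Tools > System Devices):

          # report the NUMA node of a given PCI device (-1 means no affinity reported)
          cat /sys/bus/pci/devices/0000:42:00.0/numa_node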
  25. NUMA daemon source. As for the webterminal, once it has enough text to build up a decent scrollback, the scrolling gets choppy and the typing lags a little. I do use a fairly old MacBook Air and Chrome to access Unraid, but it's not something I noticed last build. It's possible it's just that machine being goofy too. I haven't had time to research the issue fully, but I'll look into it tomorrow and let you know if I find any suggestions.