Tuftuf

Members
  • Posts: 247
  • Days Won: 1
Everything posted by Tuftuf

  1. Hello. I currently have a Ryzen 1700; it's used for my Dockers and a gaming VM. All access is via streaming, and performance has been OK, though I can't say how close to native it is anymore since I'm not passing through a monitor directly. I have a spare Intel 270-chipset board, so I could get a 7700K and have a PC up and running, but 4c/8t would be a bit tight to run two Windows VMs. I'm also looking at an Intel 300-chipset board with a 9700K, and at X299 boards with a 7920X. I will be splitting the load between my current system and the new one, but I would like two gaming PCs in the new one and, if possible, spare resources for work-based VMs (no GPU). Does anyone want to make any suggestions or comments?
  2. I have yet to test anything I've seen in there to see if it makes any difference, but I needed to save the link somewhere! http://mathiashueber.com/amd-ryzen-based-passthrough-setup-between-xubuntu-16-04-and-windows-10/

     <iothreads>6</iothreads>
     <cputune>
       <vcpupin vcpu='0' cpuset='0'/>
       <vcpupin vcpu='1' cpuset='1'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='3'/>
       <vcpupin vcpu='4' cpuset='4'/>
       <vcpupin vcpu='5' cpuset='5'/>
       <vcpupin vcpu='6' cpuset='6'/>
       <vcpupin vcpu='7' cpuset='7'/>
       <vcpupin vcpu='8' cpuset='8'/>
       <vcpupin vcpu='9' cpuset='9'/>
       <vcpupin vcpu='10' cpuset='10'/>
       <vcpupin vcpu='11' cpuset='11'/>
       <iothreadpin iothread='1' cpuset='0-1'/>
       <iothreadpin iothread='2' cpuset='2-3'/>
       <iothreadpin iothread='3' cpuset='4-5'/>
       <iothreadpin iothread='4' cpuset='6-7'/>
       <iothreadpin iothread='5' cpuset='8-9'/>
       <iothreadpin iothread='6' cpuset='10-11'/>
     </cputune>
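For anyone trying pinning like the XML above, libvirt's `virsh` CLI can confirm it actually took effect. A quick sketch — `Windows10` is a placeholder domain name, not from the post:

```shell
#!/bin/sh
# Show the current vCPU-to-host-CPU pinning for a libvirt domain.
# 'Windows10' is a placeholder; find your VM's name with 'virsh list --all'.
if command -v virsh >/dev/null 2>&1; then
    virsh vcpupin Windows10 \
        || echo "domain name is a placeholder; check 'virsh list --all'"
else
    echo "virsh not installed here"
fi
```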
  3. To restart nginx I've been having to use both:

     /etc/rc.d/rc.nginx restart
     /etc/rc.d/rc.nginx stop

     Quite often checking its status will show it's still running, so make sure it's really closed — it doesn't want to close gracefully:

     /etc/rc.d/rc.nginx status

     I've been running 6.5.1-rc3 for around 12 hours now, but my system is still under 50%. I think I need to be starting new Dockers, checking the app store, using Docker Hub, etc. before I see the memory grow. Will see how the next day or two goes.
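The stop-then-verify dance above can be scripted. A rough sketch, assuming the stock Unraid rc script path from the post; the `wait_gone` helper is mine, not part of Unraid:

```shell
#!/bin/sh
# wait_gone NAME SECONDS: poll until no process named NAME remains, or time out.
wait_gone() {
    name=$1; tries=$2
    while [ "$tries" -gt 0 ]; do
        pgrep -x "$name" >/dev/null 2>&1 || return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# Only attempt the restart on a system that actually has the Unraid rc script.
if [ -x /etc/rc.d/rc.nginx ]; then
    /etc/rc.d/rc.nginx stop
    # nginx sometimes ignores the graceful stop; force-kill it after 10s.
    wait_gone nginx 10 || pkill -9 -x nginx
    /etc/rc.d/rc.nginx start
    /etc/rc.d/rc.nginx status
fi
```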
  4. I just searched the syslogs in my older diagnostics but I don't see any. They are linked in the first post of the other thread. I did see some OOM / memory errors when this occurred a few days ago, but I believe they were from Docker attempting to write something. Can't seem to find them now.
  5. I was running a parity check for 20 hours of the 32 hours of uptime.
  6. In my previous thread it was around 6GB usage, but I generally run my system close to its memory limit. I've stopped a few things recently to avoid this issue, which is why I've seen larger memory usage numbers — it's grown to fill the gap. I will update to the latest prerelease later today. Currently the system is providing my internet connection while I rebuild something else, and I'm trying to avoid that much downtime at the moment.
  7. I could, but I'm not eager to try it. What did you have in mind? I was thinking of updating to the latest beta. It took 34 hours since my reboot for the issue to occur; I can't sit in safe mode for that long.
  8. This has been happening since the upgrade to 6.5.0. If memory usage reaches close to 99% it will freeze all Dockers, but VMs continue OK. Restarting nginx fixes it for a time, but I often lose access to the Docker/VM icons on the dashboard after restarting nginx. Not always, though.
  9. Any ideas how to figure out the cause of this spike in memory usage? nginx shouldn't be using close to 40%... (see the last line above: nginx). Restart nginx and the memory usage drops. Previous thread related to this: tower-diagnostics-20180401-2019.zip
  10. Before restarting nginx. After restarting it, nginx is now displayed on page 3 using 79MB. I admit my memory usage is generally high, but prior to the upgrade to 6.5.0 I'd not seen my Dockers all grind to a halt. Internet is handled through a VM and DNS through a Docker, so this isn't something I've run into regularly or I would have noticed. VMs continue to work when this issue occurs; Dockers all grind to a halt. Restarting nginx frees up memory and everything springs back into action, but I'm trying to understand whether nginx is the problem or something else.
  11. Since upgrading to 6.5.0 I've had issues with all my Dockers stopping working; in other words, Plex stops working along with everything else. I saw that nginx was using a large portion of memory and restarted it. Memory usage goes from 99% to 66%, and all my Dockers start to recover by themselves, but I still can't see any Dockers or VMs on the dashboard. I've changed the disk cache settings vm.dirty_background_ratio to 4 and vm.dirty_ratio to 8. The 0327 diagnostics were taken after restarting nginx and changing those settings. Since making the change to the cache settings I've not had another Docker crash, but again my server is sitting at high memory usage when last night it was around 70%. The 0329 diagnostics were taken while memory is sitting at 96%. Restarting nginx will reduce the memory usage to normal. Any ideas? Anything I'm missing? tower-diagnostics-20180327-1721.zip tower-diagnostics-20180329-1234.zip
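For anyone chasing a similar leak, the two sysctl tweaks mentioned above plus a quick per-process memory check look roughly like this (a sketch; the sysctl writes need root, and the values are the ones from the post):

```shell
#!/bin/sh
# Show the ten biggest memory consumers (RSS, %MEM) to spot a leaking process.
ps -eo rss,pmem,comm --sort=-rss | head -n 11

# The disk-cache tuning from the post: start background writeback earlier (4%)
# and throttle writers sooner (8%), so dirty pages don't pile up in RAM.
if [ "$(id -u)" -eq 0 ]; then
    sysctl -w vm.dirty_background_ratio=4 2>/dev/null || true
    sysctl -w vm.dirty_ratio=8 2>/dev/null || true
fi
```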
  12. I've always had USB sound stuttering, before and after the NPT fix; I mainly notice it when gaming. Prior to the NPT fix the whole game would lag; now I just lose sound. This was fixed by passing through the USB controller from my motherboard, but that only became possible with the latest BIOS update, which splits the groups better.
  13. I'll add that I agree this is likely a problem on your system if you are running EFI-boot Unraid and passing through your primary GPU — or was it just Ryzen-related? Either way, EFI + primary/Ryzen passthrough = problems; legacy boot, no problems for myself. I may well be using UEFI compatibility mode, which allows booting either way, but it's legacy boot from the USB for sure.
  14. I've just updated my BIOS to F20; the IOMMU groups have changed again.

      IOMMU group 0: [1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 1: [1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      IOMMU group 2: [1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 3: [1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 4: [1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      IOMMU group 5: [1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 6: [1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 7: [1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      IOMMU group 8: [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 9: [1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
      IOMMU group 10:
        [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
        [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
      IOMMU group 11:
        [1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
        [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
        [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
        [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
        [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
        [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
        [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
        [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
      IOMMU group 12:
        [1022:43b9] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43b9 (rev 02)
        [1022:43b5] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43b5 (rev 02)
        [1022:43b0] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b0 (rev 02)
        [1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1022:43b4] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1022:43b4] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1b21:1343] 03:00.0 USB controller: ASMedia Technology Inc. Device 1343
        [8086:1539] 04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
        [1969:e0b1] 05:00.0 Ethernet controller: Qualcomm Atheros Killer E2500 Gigabit Ethernet Controller (rev 10)
      IOMMU group 13:
        [10de:1189] 07:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 670] (rev a1)
        [10de:0e0a] 07:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
      IOMMU group 14: [1022:145a] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
      IOMMU group 15: [1022:1456] 08:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
      IOMMU group 16: [1022:145c] 08:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
      IOMMU group 17: [1022:1455] 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
      IOMMU group 18: [1022:7901] 09:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
      IOMMU group 19: [1022:1457] 09:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller
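Listings like the one above are typically produced by a small loop over sysfs — a common community snippet, not specific to Unraid:

```shell
#!/bin/sh
# Enumerate IOMMU groups and describe each member device with lspci.
list_iommu_groups() {
    command -v lspci >/dev/null 2>&1 || return 0   # needs pciutils
    for g in /sys/kernel/iommu_groups/*/; do
        [ -d "$g" ] || continue                    # no IOMMU enabled: skip
        echo "IOMMU group $(basename "$g"):"
        for d in "$g"devices/*; do
            # prints '[vendor:device]' IDs plus the textual description
            lspci -nns "$(basename "$d")"
        done
    done
}
list_iommu_groups
```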
  15. The system itself was used as two desktops, but not at the moment. Really I'm just collecting GPUs before building them into a separate rig, which might happen sooner than planned given the overall system temperature at the moment. I've looked at nvOC and a few others, and also thought about an Ubuntu VM and using Docker myself, but that was mainly for testing on Unraid. I've looked at some of the 12- and 16-GPU systems recently and am still making plans. You don't happen to have a parts list for your system, do you?
  16. I did get it working. I placed the first card in PCIe slot 2 — I was sure it clicked in, but maybe not, it seems — while keeping my old GPU working in slot 1 for dumping the BIOS of the new card and testing. Once I placed the other card in slot 1, the slot-2 card had somehow come a little way out of its PCIe slot, and this wasn't visible until I had removed the top card. In all this, I noticed I only had a 650W power supply, not an 850W one, so I swapped one of the cards with another PC in the house; the system now has a 1060 and a 1080 Ti.
      - The PSU isn't good enough for running both. It would manage, since I'm limiting their power usage, but the server is transcoding and the CPU is often busy, so I need to replace the PSU to run both in this system.
      - Space-wise, I had to remove hard-drive cages from the system to fit the cards and replace them afterward.
      Overall not what I had planned, and the size difference between the 1080 Ti and the 1060 mini really matters. Also, I could not dump the vBIOS of the 1080 Ti — GPU-Z didn't support it — so I downloaded one and edited that instead.
  17. I was running 1x 670 and 1x FirePro. I swapped the FirePro for a 1080 Ti and made sure I had it working, then swapped the 670 for another 1080 Ti. Upon reboot, the second card (which was tested first) is not detected, and the first is detected. These were both intended to be passed through to a VM.

      IOMMU group 0: [1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 1: [1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      IOMMU group 2: [1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 3: [1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 4: [1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
      IOMMU group 5: [1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
      IOMMU group 6:
        [1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
        [1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
        [1022:145a] 11:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
        [1022:1456] 11:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
        [1022:145c] 11:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
      IOMMU group 7:
        [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
        [1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
        [1022:1455] 12:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
        [1022:7901] 12:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
        [1022:1457] 12:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller
      IOMMU group 8:
        [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
        [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
      IOMMU group 9:
        [1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
        [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
        [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
        [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
        [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
        [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
        [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
        [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
      IOMMU group 10:
        [1022:43b9] 03:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43b9 (rev 02)
        [1022:43b5] 03:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43b5 (rev 02)
        [1022:43b0] 03:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b0 (rev 02)
        [1022:43b4] 04:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1022:43b4] 04:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1022:43b4] 04:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1022:43b4] 04:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
        [1b21:1343] 05:00.0 USB controller: ASMedia Technology Inc. Device 1343
        [8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
        [1969:e0b1] 07:00.0 Ethernet controller: Qualcomm Atheros Killer E2500 Gigabit Ethernet Controller (rev 10)
      IOMMU group 11:
        [10de:1b06] 09:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
        [10de:10ef] 09:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)

      EDIT: I'm blaming lack of power. When building this server, it seems I didn't intend to get 1080 Tis.
      EDIT 2: I would have been wrong. It seems I didn't click the card in, as it didn't show up solo either, which it did originally. I've also switched to a 1060 and a 1080 Ti, as power may still be a problem, but it wasn't the problem here.
  18. Yup, I dumped the vBIOS (ROM file) of the 670 myself and used it; I had to borrow a GPU from a friend to get it working. Tomorrow I'm hoping to use the hex-editor method for the 1080 Ti, and hopefully it will be easy enough since I have two of them coming. TBH I don't really need Windows on this machine, but I want a benchmark before I try getting a Docker mining on the GPUs. This machine has turned into a little more server than desktop since I let a few friends have access to my Plex server.
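For the record, a vBIOS can also be dumped on the Linux side through sysfs instead of GPU-Z. A sketch: `0000:07:00.0` is a placeholder PCI address (substitute your GPU's, from `lspci`), it needs root, and no driver or VM should be holding the card:

```shell
#!/bin/sh
# Dump a GPU's vBIOS via the PCI 'rom' sysfs attribute (run as root).
DEV=/sys/bus/pci/devices/0000:07:00.0      # placeholder address
if [ -w "$DEV/rom" ]; then
    echo 1 > "$DEV/rom"                    # enable ROM reads
    cat "$DEV/rom" > /tmp/gpu-vbios.rom    # copy the ROM image out
    echo 0 > "$DEV/rom"                    # disable again
    ls -l /tmp/gpu-vbios.rom
fi
```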
  19. For myself, I use an Nvidia 670 and will be switching to a 1080 Ti tomorrow. If I use the UEFI-enabled Unraid boot menu, it really messes up passing through my primary GPU; I have to use the old boot method, and then I can use both GPUs. This was true for rc15*; I need to retest now that I've updated to the final release of 6.4.
  20. Hello. Currently I have 5 VLANs passed to my Unraid server, and Plex is assigned an IP on one of them. I want to have Plex available directly on two different subnets; both VLANs are available to Unraid, but I'm not sure how I can add the second one to the Docker. Any suggestions? Thanks.
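One possible approach (a sketch, not Unraid-specific advice): Docker lets a running container join a second network with `docker network connect`. Here `br0.20` and `plex` are placeholder names for the second VLAN network and the container:

```shell
#!/bin/sh
# Attach an existing container to a second Docker network (e.g. a VLAN bridge),
# then show which networks and addresses it now belongs to.
if command -v docker >/dev/null 2>&1; then
    docker network connect br0.20 plex 2>/dev/null \
        || echo "adjust 'br0.20'/'plex' to your network and container names"
    docker inspect -f '{{json .NetworkSettings.Networks}}' plex 2>/dev/null \
        || true
fi
```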
  21. I tried. It's an issue I live with daily; I generally use Safari for most things when I'm actually on a Mac, but then I can't see CPU usage (the webhooks issue). I'm still tempted by a 1950X, but they seem to be struggling a little more than the Ryzen range with passthrough and related things. I suggest posting in the prerelease section with the error and diagnostics for 6.4. EDIT: Looks like you already have.
  22. If you're using Safari you don't see any stats; you need to use Chrome to see usage.
  23. I would strongly suggest following something similar to what SSD has said. I've not looked into Threadripper or its boards yet, but can you really run all those slots at x16? I would have expected you to be down to x8 at least with that many cards. Also, on my board the NVMe slot shares bandwidth with one of my PCIe slots (based on reading the manual), so I've never populated it.
  24. @luisv I know you've kept C-States off for a while now due to the issues. I don't think anyone has reported C-State issues since the 'fix' was added for it, which was a while ago. I'm just wondering if it's time to try to figure out what else it could be. @everyone else: has anyone else had C-State issues with the recent versions? EDIT: Maybe I should have read @david279's post first.
  25. Thanks for your post! What you've described is very similar to my setup, except that I've not given up on passthrough in the top port and I've not purchased a USB PCIe card yet. I started off passing through just the 670, then I added a 6950 but replaced that with a FirePro I had spare. I'm running two GPUs in my system, which is the main reason I've not tried the PCIe USB card yet. Passing through the keyboard and mouse has always worked great. USB audio has dropouts, but I can play most games with only a few audio hiccups; some are much worse than others.

      After the NPT fix I also had the error 43; I made a couple of posts about it in the prerelease thread. I don't really have any proof, but I felt that I had more issues after my VM was shut down post-NPT-fix than before — to be clear, I mean shutting down the VM and starting it up again. I installed the NPT fix (before Limetech added it to rc10) using a DVB plugin's build scripts and supplying the patch before it actually built the thing. Then the fun started. I rebooted and tested my Windows VM... it worked. GREAT. I rebooted and my GPU would not start up, with error 43. NOT SO GREAT! I created a new VM and my GPU would not start up: error 43. NOT SO GREAT! (Note this error 43 is presented to me within Device Manager, not at boot like I've seen others post.) I repeated this many times... nothing seemed to fix it: new VM, new drivers, old drivers. Then I rebooted my whole server... and error 43 disappeared. This led me to think there are some issues with the GPU hand-off after a VM is shut down. I have no facts or real proof to back this up, but I've not had another error code 43 since those first few days of testing.