slushieken

Members
  • Posts: 17
  • Joined
  • Last visited


  1. I notice it is not 'all' of my hard drives - just 2 or sometimes 3, and it is always the same ones. Mainly it is the Toshiba 4.0TB MD04ACA400 that seems to spin up routinely.
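For anyone watching the same behavior: a quick way to catch which drives are actually spun up at a given moment is `hdparm -C`, which queries the power state without waking a sleeping drive. A sketch; the `/dev/sd?` glob is an assumption, substitute your own device names.

```shell
# Print the power state of each SATA drive so you can see which ones are
# spinning. hdparm -C reports "active/idle" or "standby"; a drive in
# standby is not woken by this query.
for d in /dev/sd?; do
  printf '%s: ' "$d"
  hdparm -C "$d" 2>/dev/null | grep -o 'active/idle\|standby' || echo unknown
done
```

Running this periodically (e.g. from cron) makes it easy to see which drives wake up and when.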
  2. Researching for a fix I stumbled on this post. I am seeing the same behavior on 6.9.2.
  3. I am guessing from this statement and your link to the Reddit article that you already know Windows 8.1 is the go-to solution at the moment for this problem, i.e., 8.1 is harder to virtualize for your 3D setup. Interesting problem for sure! I wonder if you could also solve it by virtualizing under a different virtualization platform...
  4. Likely you need to use VNC - and the right VNC client as well. I remember one VNC client was laggy and even froze the desktop every few minutes/seconds, forcing me to disconnect and reconnect over and over. Switching to another VNC client solved that - but then other problems surfaced and the struggle was too much, so I quit trying to do that in Unraid. BTW, I was looking through your posts, and your punctuation could use some work. Grunting and pointing will only get you so far.
  5. I just wanted to say this script is just... amazing. I have been working for several years on cleaning up nearly 20TB of movies and TV shows, split incorrectly over time due to one thing or another - settings errors on splits, etc. I had hit-and-miss success with many different methods over that time, but yours wrapped it all up in one fell swoop. Thank you.
  6. Thanks for coming back to let me know! I have not yet had time to actually test this at the VM level for passthrough. Are there any more steps to get it going beyond that - e.g., special configuration options for the VM definition? It would sure help me out, and anyone else who comes across this later, if so.
  7. Step one complete: https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM OK, that's the first part done for me; I'll come back later to finish this up (or maybe someone else can). In the meantime, if you have not already, make that change and reboot.
  8. I have been wanting to do this myself. I took your info to Google and found: https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM Step #1 there is to check 'cat /sys/module/kvm_intel/parameters/nested', which should respond 'Y' if nested virtualization is enabled and 'N' if not available/enabled. If it responds 'N', like mine did, you next enter the enable command from that page. Then you are supposed to reboot and check again. So, to summarize: 1. Did you reboot? If not, do that. 2. After the reboot, re-run 'cat /sys/module/kvm_intel/parameters/nested'. If it still shows 'N', my guess is to check around your BIOS and make sure everything is enabled. I am about to test this myself. Edit: OK - that didn't work for me... after a reboot it still shows: $ cat /sys/module/kvm_intel/parameters/nested N Going to poke around the BIOS to make sure, but I could swear I had enabled all those functions before...
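The check-then-enable sequence from the Fedora wiki can be sketched as a short script. This assumes an Intel CPU (module kvm_intel; on AMD the module is kvm_amd), and the modprobe.d filename below is my own arbitrary choice.

```shell
#!/bin/sh
# Sketch of the check -> enable -> re-check sequence from the Fedora wiki.
# Assumes Intel (kvm_intel); on AMD use kvm_amd instead.

nested_state() {
  # Prints Y/N (1/0 on newer kernels); falls back to N when the parameter
  # file is absent, e.g. because the module is not loaded.
  cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo N
}

case "$(nested_state)" in
  Y|1)
    echo "nested virtualization is enabled" ;;
  *)
    echo "nested virtualization is disabled"
    # To enable: persist the module option (as root), then reboot and
    # re-run this script. If it still reads N afterwards, check the
    # VT-x / virtualization switches in the BIOS - the module option
    # cannot override firmware that disables them.
    #   echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
    ;;
esac
```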
  9. Lol; but seriously... there are many causes of mouse lag with VNC. If I connect with the default browser-based noVNC client, even when the host is right next to me, I am not able to navigate due to the extreme mouse lag. In my case I had to go through several different VNC clients before I found that Chicken of the VNC mostly worked. Once I used the native desktop tool, all the problems disappeared as well. VNC through the hypervisor with a Windows or Linux guest seems a bit buggy at best... better to use it as a fallback, and to test more than one VNC client.
  10. Saarg, you are the best... that's the answer I was looking for. Tuftuf thanks to you as well... my pass through thoughts were indeed based on faulty instinct, which led me down the wrong path for the solution.
  11. I want/need to pass through the VT-d and/or IOMMU feature to my guest. It is required to be a visible/enabled CPU feature for the OS I am installing; otherwise it complains and refuses to install. I work with a lot of 'demo' guest OS packages, so these are not things I can alter (i.e., to not require that at install time). In VMware, if this was enabled in a few spots it would just pass on through - an apples-to-oranges comparison, I know, but I was hoping for something similar in functionality... Unraid: root@Storage:~# egrep -c '(vmx|svm)' /proc/cpuinfo 8 Guest (Ubuntu): shows 0 (zero) with the same command. Hopefully we can skip the questions of why, and of choosing another way: how do I pass that through? 1. Do I have to disable/unbind that device/IOMMU group first in Unraid to do that? 2. If it must be unbound, will it cause any obvious issues? 3. Which IOMMU group would carry that? Group 0 and/or possibly group 1 seem most likely. If it is 1, I am concerned I would also be adding my RAID controller....
IOMMU group 0
  [8086:191f] 00:00.0 Host bridge: Intel Corporation Skylake Host Bridge/DRAM Registers (rev 07)
IOMMU group 1
  [8086:1901] 00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller (x16) (rev 07)
  [9005:028c] 01:00.0 RAID bus controller: Adaptec Series 7 6G SAS/PCIe 3 (rev 01)
IOMMU group 2
  [8086:1912] 00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
IOMMU group 3
  [8086:a12f] 00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
  [8086:a131] 00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
IOMMU group 4
  [8086:a13a] 00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
IOMMU group 5
  [8086:a102] 00:17.0 SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
IOMMU group 6
  [8086:a169] 00:1b.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Root Port #19 (rev f1)
IOMMU group 7
  [8086:a16a] 00:1b.3 PCI bridge: Intel Corporation Sunrise Point-H PCI Root Port #20 (rev f1)
IOMMU group 8
  [8086:a112] 00:1c.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #3 (rev f1)
IOMMU group 9
  [8086:a114] 00:1c.4 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #5 (rev f1)
IOMMU group 10
  [8086:a118] 00:1d.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #9 (rev f1)
IOMMU group 11
  [8086:a145] 00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
  [8086:a121] 00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
  [8086:a170] 00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
  [8086:a123] 00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
IOMMU group 12
  [8086:15b8] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V (rev 31)
IOMMU group 13
  [1b21:1242] 02:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
4. Would PCI-stub help here? Are there other alternatives? 5. What configuration changes need to be made to the guest VM config to do that? Any needed for the host OS (Unraid)? 6. Anything I am not thinking of here? 7. Perhaps there is a way I can fake the guest VM into thinking it has that feature when it actually does not? I would prefer to have the functionality actually present, but as these are demo and not prod systems, I expect it would be acceptable even without it actually being there, if I can fake it. Thanks all.
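For reference: vmx is a CPU feature rather than a PCI device, so nothing in these IOMMU groups needs to be unbound. The usual route (assuming nested virtualization is enabled on the host) is to give the guest the host CPU model via libvirt's host-passthrough mode. A sketch of the checks involved; "demo-vm" is a placeholder VM name.

```shell
# Host side: count of logical CPUs advertising vmx/svm (should be nonzero)
grep -c -E 'vmx|svm' /proc/cpuinfo 2>/dev/null

# Host side: nested support on the kvm_intel module (should print Y or 1)
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo N

# Guest side: the <cpu> element in the libvirt XML should pass the host
# CPU through, e.g. by running `virsh edit demo-vm` and setting:
#   <cpu mode='host-passthrough'/>
# Inspect what the definition currently contains:
virsh dumpxml demo-vm 2>/dev/null | grep -m1 '<cpu' || echo "virsh not available"
```

With host-passthrough set and nested virt on, the same `egrep -c '(vmx|svm)' /proc/cpuinfo` check inside the guest should return a nonzero count.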
  12. Steini, I just discovered this plugin... I really want to use it. But I am on the current release. I am sure it is a pain to do this each time, but can you compile it again against the current 6.2.1?
  13. I have been receiving this error regularly for some time: unregister_netdevice: waiting for lo to become free. Usage count = 1. Much of the time it is harmless and nothing happens. Sometimes, though, networking grinds to very nearly a full stop. This affects the whole OS: I cannot connect via the GUI at all, and even when connected via SSH/Telnet, completing commands becomes hard as the connection goes in and out. It stays that way until a full reboot can be executed. I can't test restarting only Docker, since connectivity becomes so problematic at that point that issuing commands is difficult. From my research, this is a cross-platform kernel issue that shows up most commonly when using Docker containers with high network activity (for example, torrents). It is also intermittent, and therefore hard to reproduce outside a production environment with lots of connections. Devs indicate it seems related to hairpin-mode networking, and that it stops if hairpin mode can be disabled. Hairpin mode is used by default on containers. Suggested workarounds found so far:
     - Disable IPv6 in the containers. Hard to do: even when Docker is configured not to use IPv6, it seems to use it anyway, and even with IPv6 disabled in the kernel via boot parameters, many report it did not fix the issue.
     - Set Docker networking to promiscuous mode, which disables hairpin mode. This worked for Kubernetes users and some others, but I don't know how to do this on a container myself; anyone?
     - Don't use Docker containers for torrents, at least until this is fixed. I am probably going this route unless I can get promiscuous mode going and it works.
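On the promiscuous-mode workaround: one approach from the Kubernetes reports is to put the Docker bridge itself into promiscuous mode, rather than an individual container. A sketch, assuming the default bridge name docker0; it needs root, and the setting is not persistent across reboots, so it would have to be re-applied from a startup script (e.g. the go file on Unraid).

```shell
# Put the Docker bridge into promiscuous mode, which reportedly sidesteps
# the hairpin-mode trigger. "docker0" is Docker's default bridge name;
# adjust if yours differs. Not persistent across reboots.
BRIDGE=docker0

if ip link show "$BRIDGE" >/dev/null 2>&1; then
  ip link set "$BRIDGE" promisc on
  # Confirm: the flags line should now include PROMISC
  ip link show "$BRIDGE" | grep -o PROMISC
else
  echo "bridge $BRIDGE not present on this host"
fi
```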
  14. Do go with Intel. I started out all AMD, as the pricing was irresistible, but I paid for it quickly in power consumption and in compatibility issues with some HBA cards. Idle power draw on those Intel boards is so much lower than idle on AMD; the AMD drain really eats your lunch.
  15. I have one major issue with it - price. Assuming you are good with that, I noticed one other, less important issue. From the specs link: http://www.supermicro.com/products/motherboard/atom/x10/a1sa7-2750f.cfm 1x PCI-E 2.0 x4 (in x8) slot <-- this might be an issue, as the good HBAs are x8, but with 17 onboard ports it also might not matter. That's it, though. It is a sweet board; if you find a bulk deal or something good on pricing, please let me know!