slushieken

Members
Everything posted by slushieken

  1. OK, this is strange, but I think if you are just patient, the GUI can connect again. I couldn't get my browser to connect to the GUI port 8080 either after a very recent reboot (a few days ago). My guess is it was related to an upgrade I had just completed to 6.12.8, for which I rebooted. It seemed to be working fine until that upgrade and reboot. I had stopped the container until I could troubleshoot better. I rebooted this AM a couple of times, noticed I still could not connect, and started gathering the Docker logs to post here. Then I thought I would run a TCP packet capture to assess whether it was networking or the application itself. After about 10 minutes (while I was writing this post, in fact), I noticed the Docker log suddenly updated with a bunch of new lines, and I could then connect to the GUI port normally! I'll watch to see if I can help further, but next I need to know how to gather the supervisord.log for you. I use, and am still using, 'Bridge' network mode only.
  2. I am doing a fresh, new install of the latest version, and getting this too. I have tried at least 3 different repos and 3 different revisions - no luck. The good news is this repo is the one that comes closest to working - only Grafana is failing. The logs in the container are absolutely empty for Grafana. When I execute "grafana-server" at the console, it says: "Grafana-server Init Failed: Could not find config defaults, make sure homepath command line parameter is set or working directory is homepath" Can some kind soul out there take a look at this? There are too many building blocks required to get this stack working, and I am not familiar enough with any of them to even start troubleshooting. 🤕 Thank you.
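For what it's worth, that error means grafana-server cannot resolve its "homepath" (the directory holding its config defaults). A sketch of the two usual ways to point it there - /usr/share/grafana is an assumption and may differ inside this container:

```shell
# Grafana resolves conf/defaults.ini relative to its homepath.
# /usr/share/grafana is a common default but an assumption here -
# check where Grafana is actually installed in the container.
#
# Option 1: pass the homepath flag directly:
#   grafana-server -homepath /usr/share/grafana
#
# Option 2: set the env var the official Docker images read:
#   export GF_PATHS_HOME=/usr/share/grafana
```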
  3. I drew this basic SilverStone CS351 PNG to populate my dashboard, since there is no existing image for that chassis. I used a picture of the server front as a guide and traced the outline over it. I see others in this thread include 3 different copies of their PNGs. On my Unraid server (the way I use it), I didn't find a place to use those. So I searched this thread for a set of requirements, e.g. what is needed for a full set of images and where those would be uploaded, but I couldn't find an explanation. If someone who understands better could explain what is needed and why/where it goes, I could create a full set. Anyway, here is what I have so far.
  4. I notice it is not 'all' of my hard drives - just 2 or sometimes 3, and it is always the same ones. Mainly it is the Toshiba 4.0TB MD04ACA400 that seems to spin up routinely.
  5. Researching for a fix I stumbled on this post. I am seeing the same behavior on 6.9.2.
  6. I am guessing by this statement and your link to the Reddit article, you already know that Windows 8.1 is the go-to solution at the moment for this problem, i.e. 8.1 is harder to virtualize for your 3D work. Interesting problem for sure! I wonder if you could virtualize under a different virtualization platform to solve it as well...
  7. Likely you need to use VNC - and the right VNC client as well. I remember that one VNC client was laggy and even froze the desktop every few minutes/seconds, forcing me to disconnect and reconnect over and over. Switching to another VNC client solved that - but then other problems surfaced and the struggle was too much, so I quit trying to do that in Unraid. BTW, I was looking through your posts, and your punctuation could use some work. Grunting and pointing will only get you so far.
  8. I just wanted to say this script is just... amazing. I have been working on cleaning up nearly 20TB of movies and tv shows for several years now split incorrectly over time due to this or that, settings errors on splits, etc. I have been having success over that time with many different methods and a hit and miss effort, but yours wrapped it all up in one fell swoop. Thank you.
  9. Thanks for coming back to let me know! I have not yet had the time to actually test this at the VM level for passthrough. Are there any more steps to get it going beyond that? RE: special configuration options for the VM definition? It would sure help me out, and anyone else when they come back across this if so.
  10. Step one complete: https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM Fix: OK, that's the first part done for me. I'll come back later to finish this up (or maybe someone else can). In the meantime, if you have not already, make that change and reboot.
  11. I have been wanting to do this myself. I took your info to Google and found: https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM There, step #1 is to check 'cat /sys/module/kvm_intel/parameters/nested'. It should respond 'Y' if it is working and 'N' if not available/enabled. Of course, if it shows 'N', like mine, you next enter the enable setting from that wiki page. Then you are supposed to reboot and check again. So, to summarize: 1. Did you reboot? If not, do that. 2. After the reboot, re-run 'cat /sys/module/kvm_intel/parameters/nested'. If it still shows 'N', my guess is to next check around your BIOS to make sure everything is enabled. I am about to test this myself. Edit: OK - that didn't work for me... After a reboot it still shows: $ cat /sys/module/kvm_intel/parameters/nested N Going to poke around the BIOS to make sure, but I could swear I had enabled all those functions before...
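The check described above can be sketched as a small shell helper. check_nested is a hypothetical name of my own; the sysfs path comes from the Fedora wiki linked in the post and assumes an Intel CPU (kvm_intel - AMD uses kvm_amd instead):

```shell
# Interpret the value of /sys/module/kvm_intel/parameters/nested.
# Newer kernels report Y/N, some report 1/0 - handle both.
check_nested() {
  case "$1" in
    Y|1) echo "nested virtualization enabled" ;;
    N|0) echo "nested virtualization disabled" ;;
    *)   echo "unknown value: $1" ;;
  esac
}

# On a live system you would feed it the parameter value:
#   check_nested "$(cat /sys/module/kvm_intel/parameters/nested)"
```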
  12. Lol; but seriously... there are many causes for mouse lag with VNC. If I connect with the noVNC browser-based default client, even when the host is right next to me, I am not able to navigate due to the extreme mouse lag. In my case I had to go through several different VNC clients before I found that Chicken of the VNC worked, mostly. Once I used the native desktop tool, all problems disappeared. VNC through the hypervisor with a Windows or Linux guest seems a bit buggy at best... Better to use it as a fallback, and test more than one VNC client.
  13. Saarg, you are the best... that's the answer I was looking for. Tuftuf thanks to you as well... my pass through thoughts were indeed based on faulty instinct, which led me down the wrong path for the solution.
  14. I want/need to pass through the VT-d and/or IOMMU feature to my guest. It is a required, visible CPU feature for the OS I am installing; otherwise it complains and refuses to install. I work with a lot of 'demo' guest OS packages, so these are not things I can alter (i.e. to not require that on install). In VMware, if this was enabled in a few spots it would just pass through - I know this is an apples-to-oranges comparison, but I was hoping for something similar in functionality...
Unraid:
root@Storage:~# egrep -c '(vmx|svm)' /proc/cpuinfo
8
Guest (Ubuntu): shows 0 (zero) with the same command.
Hopefully skirting around the questions of why and "choose another way": how do I pass that through?
1. Do I have to disable/unbind that device/IOMMU group first in Unraid to do that?
2. If it must be unbound, will it cause any obvious issues?
3. Which IOMMU group would carry that? Group 0 and/or possibly group 1 seem most likely. If it is 1, I am concerned I am also adding my RAID controller...
IOMMU group 0
  [8086:191f] 00:00.0 Host bridge: Intel Corporation Skylake Host Bridge/DRAM Registers (rev 07)
IOMMU group 1
  [8086:1901] 00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller (x16) (rev 07)
  [9005:028c] 01:00.0 RAID bus controller: Adaptec Series 7 6G SAS/PCIe 3 (rev 01)
IOMMU group 2
  [8086:1912] 00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
IOMMU group 3
  [8086:a12f] 00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
  [8086:a131] 00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
IOMMU group 4
  [8086:a13a] 00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
IOMMU group 5
  [8086:a102] 00:17.0 SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
IOMMU group 6
  [8086:a169] 00:1b.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Root Port #19 (rev f1)
IOMMU group 7
  [8086:a16a] 00:1b.3 PCI bridge: Intel Corporation Sunrise Point-H PCI Root Port #20 (rev f1)
IOMMU group 8
  [8086:a112] 00:1c.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #3 (rev f1)
IOMMU group 9
  [8086:a114] 00:1c.4 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #5 (rev f1)
IOMMU group 10
  [8086:a118] 00:1d.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #9 (rev f1)
IOMMU group 11
  [8086:a145] 00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
  [8086:a121] 00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
  [8086:a170] 00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
  [8086:a123] 00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
IOMMU group 12
  [8086:15b8] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V (rev 31)
IOMMU group 13
  [1b21:1242] 02:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
4. Would PCI-stub help here? Other alternatives?
5. What configuration changes need to be made to the guest VM config to do that? Any needed for the host OS (Unraid)?
6. Anything I am not thinking of here?
7. Perhaps there is a way I can fake out the guest VM into thinking it has that when it actually does not? I would prefer to have the functionality actually present, but as these are demo and not prod, I expect it would be acceptable to fake it. Thanks all.
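A note that may help with questions 4-7: vmx is a CPU feature flag, not a PCI device, so IOMMU groups and pci-stub do not apply to it. With libvirt/KVM (which Unraid VMs use), the guest can be shown vmx through the VM's XML CPU definition, provided nested virtualization is also enabled in the kvm_intel module on the host. A minimal sketch, not Unraid-specific advice - Skylake-Client is just an example model name:

```xml
<!-- Option 1: mirror the host CPU into the guest, vmx included -->
<cpu mode='host-passthrough'/>

<!-- Option 2: a named model with the vmx flag explicitly required -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Skylake-Client</model>
  <feature policy='require' name='vmx'/>
</cpu>
```

Only one `<cpu>` element goes in the VM definition; host-passthrough is the simpler choice when the VM will not be migrated between hosts.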
  15. Steini, I just discovered this plugin... I really want to use it. But I am on the current release. I am sure it is a pain to do this each time, but can you compile again against current 6.2.1?
  16. I have been receiving this error regularly for some time:
unregister_netdevice: waiting for lo to become free. Usage count = 1
Much of the time it is harmless and nothing happens. Sometimes, though, everything grinds to very nearly a full stop networking-wise. This affects the full OS: I cannot connect via the GUI at all, and even when connected via SSH/Telnet, completing commands becomes hard as the connection goes in and out. It stays that way until a full reboot can be executed. I can't test restarting only Docker, since connectivity becomes so problematic at that point that issuing commands is difficult. From my research, this is a cross-platform kernel issue, and it shows up most commonly when using Docker containers with high network activity (for example, torrents). It is also intermittent, and therefore hard to reproduce outside a production environment with lots of connections. Devs indicate it seems related to hairpin-mode networking, and that it stops if hairpin mode can be disabled. Hairpin mode is used by default on containers. Suggested workarounds found so far:
- Disable IPv6 in the containers. Hard to do: even if Docker is configured not to use IPv6, it seems to use it anyway, and even when it is disabled in the kernel via boot parameters, many say that did not fix the issue.
- Set Docker networking to promiscuous mode, which disables hairpin mode. Anyone know how to do this on a container? I don't know myself. This worked for Kubernetes users and some others.
- Don't use Docker containers for torrents, at least until this is fixed. I am probably going this route unless I can get promiscuous mode going and it works.
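For the promiscuous-mode workaround mentioned above, the commands reported to work elsewhere (e.g. by Kubernetes users) look like the following sketch. docker0 is Docker's default bridge name, this requires root, and it does not persist across reboots:

```shell
# Put the Docker bridge into promiscuous mode, which was reported
# to sidestep the hairpin-mode unregister_netdevice hang:
ip link set docker0 promisc on

# Confirm PROMISC appears in the interface flags:
ip link show docker0
```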
  17. Do go with Intel. I started out all AMD, as the pricing was irresistible, but I paid for it quickly in power consumption and compatibility issues with some HBA cards. Idle draw on those Intel boards is so low versus idle on AMD; the AMD idle power really eats your lunch.
  18. I have one major issue with it: price. Assuming you are good with that, I noticed one other, less important issue. The specs link: http://www.supermicro.com/products/motherboard/atom/x10/a1sa7-2750f.cfm 1x PCI-E 2.0 x4 (in x8) slot <-- might be an issue, as the good HBAs are x8, but with 17 ports it also might not matter. That's it though. It is a sweet board; and if you find a bulk deal or something good on pricing, let me know please!
  19. My understanding is a little bit different from Gary's, but still similar. There are 3 major types of DDR3 memory:
1. Fully Buffered, which may be called Registered. Always includes ECC as a feature. Includes a buffer chip to reduce the load of managing memory, thereby increasing the amount of addressable memory available to the CPU. Only supported by motherboards and processors that support this feature.
2. Unbuffered, with ECC. No buffer chip in these. Does provide ECC, but only if the motherboard AND the processor can use it: the motherboard must be wired to get the ECC signal from memory to the CPU (not all are), and the CPU must know how to use it. Most AMD processors do; some Intel do. In previous generations Intel only supported ECC in the Xeon line; now it is different.
3. Unbuffered, plain old memory. No ECC, no buffer chip.
Advantages: #3 can be used in place of #2, but of course there is no ECC. #2 can be mixed with #3, but all ECC functions become disabled when mixed. Cons: you can't use ECC memory ALONE on a motherboard that does not support it; it just plain won't boot. Add a stick of plain memory to that, though, and it disables the ECC functions, letting you use the memory.
  20. I am interested in picking up your empty chassis, assuming it is a complete chassis, not just the top piece. I am not interested in the microserver though. Are you willing to consider breaking up the set?