Bureaucromancer

Members
  • Posts: 38
  • Gender: Undisclosed


  1. Ok, I've done some troubleshooting on this already, but in short: after upgrading to 6.9, all drives attached to my SATA card disappear. Reverting (and fixing the cache config) restores functionality. At this point I've tried playing with the IOMMU setting in syslinux.cfg, which was set to PT originally, but have found only that setting it to ON breaks the card on 6.8. This setting has no effect on the behavior on 6.9. IOMMU groups are below (it's the Marvell controller giving issues, but bear in mind that it's not reporting as bound to VFIO, disappearing in its own right, or causing errors in the log under 6.9):

     IOMMU group 0:
       [8086:0100] 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
     IOMMU group 1:
       [8086:0101] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
       [1b4b:9230] 01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)
     IOMMU group 2:
       [8086:0102] 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
     IOMMU group 3:
       [8086:1e31] 00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
     IOMMU group 4:
       [8086:1e3a] 00:16.0 Communication controller: Intel Corporation 7 Series/C216 Chipset Family MEI Controller #1 (rev 04)
     IOMMU group 5:
       [8086:1e2d] 00:1a.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #2 (rev 04)
     IOMMU group 6:
       [8086:1e20] 00:1b.0 Audio device: Intel Corporation 7 Series/C216 Chipset Family High Definition Audio Controller (rev 04)
     IOMMU group 7:
       [8086:1e10] 00:1c.0 PCI bridge: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 1 (rev c4)
     IOMMU group 8:
       [8086:1e18] 00:1c.4 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 5 (rev c4)
     IOMMU group 9:
       [8086:1e26] 00:1d.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #1 (rev 04)
     IOMMU group 10:
       [8086:244e] 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a4)
     IOMMU group 11:
       [8086:1e49] 00:1f.0 ISA bridge: Intel Corporation B75 Express Chipset LPC Controller (rev 04)
       [8086:1e00] 00:1f.2 IDE interface: Intel Corporation 7 Series/C210 Series Chipset Family 4-port SATA Controller [IDE mode] (rev 04)
       [8086:1e22] 00:1f.3 SMBus: Intel Corporation 7 Series/C216 Chipset Family SMBus Controller (rev 04)
       [8086:1e08] 00:1f.5 IDE interface: Intel Corporation 7 Series/C210 Series Chipset Family 2-port SATA Controller [IDE mode] (rev 04)
     IOMMU group 12:
       [10ec:8168] 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)

     My working syslinux.cfg is:

     default menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label unRAID OS
       menu default
       kernel /bzimage
       append initrd=/bzroot iommu=pt kvm-amd.npt=0 kvm.ignore_msrs=1
     label unRAID OS GUI Mode
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui
     label unRAID OS Safe Mode (no plugins, no GUI)
       kernel /bzimage
       append initrd=/bzroot unraidsafemode
     label unRAID OS GUI Safe Mode (no plugins)
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui unraidsafemode
     label Memtest86+
       kernel /memtest
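For anyone wanting to compare their own topology, a listing like the one above can be generated straight from sysfs. A minimal sketch; the helper name and the base-path parameter are mine (the parameter exists only so the function can be pointed at a fake tree for testing), and it prints bare PCI addresses rather than the full `lspci` descriptions:

```shell
#!/bin/sh
# Print each IOMMU group and the PCI addresses of the devices in it.
# Defaults to the kernel's /sys/kernel/iommu_groups tree; pass another
# base path to run it against a mock directory layout.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for g in "$base"/*; do
        [ -d "$g" ] || continue
        printf 'IOMMU group %s:\n' "${g##*/}"
        for d in "$g"/devices/*; do
            [ -e "$d" ] || continue
            # The basename is the PCI address, e.g. 0000:01:00.0.
            printf '  %s\n' "${d##*/}"
        done
    done
}
```

Feeding each printed address to `lspci -nns <address>` yields the vendor/device strings shown in the listing above.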
  2. I'm getting "Connection Refused" with Bridge type.
  3. Is there a working client config guide anywhere? The link goes to the forum home page.... Edit: https://web.archive.org/web/20160807091818/http://lime-technology.com/forum/index.php?topic=19439.0 Very old, but it does still work if you have an idea of what you're looking at. Short version: the one-click install works, and .ovpn files go in an openvpn directory you will need to create in the root of the flash drive.
  4. Is there any way to change the VNC timeouts for either this docker or unraid as a whole? I've never had problems with VMs, but it would be immensely useful if I could at least not time out my session on Krusader (ideally I'd like to just resume it)...
  5. Ok, I'm lost again. I had my reverse proxy (no external access) working for custom server names, but suddenly I'm getting connection refused errors from everything that should go through NGINX... I reinstalled the NGINX docker completely, and the test page works as expected, but my config file is either ignoring the port I'm redirecting to or (more likely, as I see nothing in the logs) not working at all. I've cut my default file down to a single docker for the moment, but can anyone see why this server entry would get me connection refused, having previously been working? The Plex server is working properly on 192.168.0.201:32400, or on plex.hda.home:32400 (which for the moment is from a local hosts file, but that will be addressed once I get a new router), so SOMETHING seems to be up with NGINX, but I haven't changed anything...

     server {
         listen 80;
         server_name plex.hda.home;
         location / {
             proxy_pass http://192.168.0.201:32400/;
         }
     }

     PS: And of course it worked the moment I posted this, having done nothing differently... Resolved I guess, but I'll leave this up for the moment lest anyone have ideas. It's a stupidly simple config, I know, but none of this is exposed externally except through a VPN, and the custom DNS is, like I said, entirely hosts-based for the moment (I had dnsmasq working on my router, but for the moment I'm having to connect through a non-optional ISP device with no such abilities; looking at solutions, but frankly hosts works for me).
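For what it's worth, when a minimal block like that misbehaves intermittently, it's common to also pass the original host and client address through to the upstream, since some applications check the Host header. A sketch of the same server entry with the usual proxy headers added (the header set is my addition, not part of the original config):

```nginx
server {
    listen 80;
    server_name plex.hda.home;

    location / {
        proxy_pass http://192.168.0.201:32400/;
        # Forward the original host and client address to the upstream.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```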
  6. Ok, bumping for the fix... privoxy was off... I'm not entirely clear on the relationship between OpenVPN and privoxy, or why my config or the required config changed (although from my limited understanding of privoxy it was always there and probably got turned off), but it's working now (including confirming that my IP is filtered).
  7. Still stumped here... Completely removed the app, reinstalled, and recreated settings manually. Still refuses all connections. Tried installing bare transmission; it's working as expected... Something's different between transmission and this version that's breaking things, but I'm really lost now. Argh. Tried deleting my docker image in desperation. No change. Further update: I was able to get into my web interface by turning off the vpn_enabled flag in the docker, so I guess we now know that I'm dealing with a VPN issue rather than a transmission one, but does anyone know what might be happening here? To be honest I'm not even sure how to verify whether the docker is connecting to my VPN correctly without being able to access the Transmission web GUI.
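One low-tech way to check the VPN side without the web GUI is to look for a tunnel interface inside the container. A sketch, assuming `ip` is available in the container; the helper name and the container name in the usage comment are mine:

```shell
#!/bin/sh
# Return success if `ip addr` output (piped on stdin) lists a tun
# interface, i.e. the OpenVPN tunnel came up inside the container.
has_tun() {
    grep -Eq '^[0-9]+: tun[0-9]+' -
}

# Intended use (the container name "transmission-vpn" is an assumption):
#   docker exec transmission-vpn ip addr | has_tun && echo "tunnel up"
```

If the tunnel is up, comparing the container's apparent public IP against the host's (e.g. via an IP-echo service) confirms traffic is actually being routed through it.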
  8. As of today, any attempt to connect to my install, which has been working for well over a year, gets me a connection refused error... seems like more or less the same issue, but it doesn't seem to give me anything helpful here... Any ideas out there?
  9. So this is probably a bad idea, but I do want to ask... Right now I've got an NGINX docker running a reverse proxy for the sole purpose of giving proper URLs with subdomains to my apps, e.g. plex.hda.home, transmission.hda.home (HDA being the server name; weird, but I got very used to typing that while I ran Amahi). With unraid now running its own web interface on NGINX, is there any realistic way to put my configuration into the instance of NGINX that's running the interface and avoid the duplication?
  10. Yeah, I've tried with and without pinning; ignore_msrs is on, and iommu=pt. For the moment I'm going to do some experiments on bare metal and see what I get; there's definitely something up with the cores and I'd like, among other things, to rule out a bad CPU.
  11. It definitely can be patched, but for the moment ACS override with the multifunction option seems to get everything separated.
  12. And in further weirdness, it seems in deleting my pinning I accidentally set the VM to a single core, and it works with that. It does not with the full 12 cores I usually have; experimenting now to see if I can track a hard limit down. Another edit: what I thought was a hard lock to 100% CPU may not actually be such. I'm letting it sit with the pegged cores at the moment on the full CPU set I want, having found that the time to get through the Windows splash screen drastically increased every time I increased the core count. The weird thing is that the progress spinner is moving now and then, so it looks like SOMETHING is happening behind all that CPU activity, and it's somehow related to the number of cores in play. I almost wonder if it's something odd about handling of the CCX units on Ryzen now?
  13. Interesting. The only bits different are apparently the KVM hidden state and that I'm trying to use i440fx. Tried combinations of those with no changes... Edit: I take it back. I forgot to turn off pinning in the above; Q35 without pinning seems to have got things going. i440fx doesn't, and neither works with CPUs pinned. Weird. Also amusing to see the system try to sort out CPU time with 100% locked cores that aren't pinned (and I'd say it's a bloody miracle that it didn't lock up, given that everything else under the sun seems to crash my system right now).
  14. I did find a post referencing this on the iommu list, but no replies, so right there with you.
  15. I don't have much experience mucking about with kernel parameters; is there any particular order my appends should go in in syslinux? My first attempt to use kvm-amd.npt=0 just led to VMs eating 100% of the assigned cores and never booting. Any in-depth discussions of this you know of? I've found one on the Red Hat forums that ends up suggesting following the iommu mailing list, but are there any other things I should be following? Edit: ok, so the quick and dirty fix was to switch to QEMU (from host-passthrough) with NPT off; suddenly the GPU is used properly, but of course my CPU performance has gone to hell. NPT is obviously going to have some loss, but is there much of anything to optimize performance with QEMU, or to get host-passthrough working without NPT?
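On the ordering question: as far as I know the kernel doesn't care about the relative order of distinct parameters; what matters in syslinux is that they all sit on the single append line of the boot entry, after initrd=. As a fragment, the working entry from the config quoted earlier in this thread:

```
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot iommu=pt kvm-amd.npt=0 kvm.ignore_msrs=1
```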