loukaniko85

Members · 36 posts

  1. We're in agreement that Unraid shouldn't be exposed directly, so I'm unsure why the remark; perhaps you misread my initial statement, which was that the default position of "X shouldn't be exposed to the internet" is no longer a current one. As I stated in my post, and as you also outlined in yours, there are various ways of exposing the Unraid UI to the internet: via a reverse proxy in my case, and via a VPN in yours. You're simply proving my premise that even services not designed to be exposed directly, such as the Unraid UI, can still be exposed indirectly and securely. Thus the position that "X shouldn't be exposed to the internet" is no longer the status quo, as there are many ways of exposing services securely, irrespective of their design; your two statements seem like a contradiction here.

     That said, I wouldn't expose my server with the Unraid Connect method, since it requires you to port forward directly to your Unraid machine (though only HTTPS is exposed, not any other part of the OS). It's also not just the root credentials/2FA that are required with their method: you authenticate with your forum credentials, then the second factor, and once successful you're at your Unraid UI landing page, where your local root credentials come in. Again, though, I don't and wouldn't use this method.

     A VPN is a good option in general, but if you look at the OP (below), the initial query was specifically about not using a VPN and implementing 2FA; hence my statement about using a reverse proxy, which can introduce 2FA to all your services, whether you expose them on the internet or only use them internally. With all that said, I prefer using a reverse proxy to expose internal services, as it provides the same level of secure access, supports 2FA, and shields the internal services from direct attack; a minimal sketch of that pattern follows below.
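     To make the reverse-proxy-plus-2FA pattern concrete, here is a minimal sketch of a Traefik dynamic (file-provider) configuration that puts Authelia's forwardAuth in front of the Unraid UI. The hostnames, the backend address, and the service name are assumptions, and the verify endpoint path varies by Authelia version, so treat this as illustrative only:

         # Hypothetical Traefik dynamic config; every hostname/address here
         # is an example, not taken from the posts above.
         http:
           middlewares:
             authelia:
               forwardAuth:
                 # Older Authelia versions use /api/verify; newer ones use
                 # /api/authz/forward-auth; check your version's docs.
                 address: "http://authelia:9091/api/verify?rd=https://auth.example.com/"
                 trustForwardHeader: true
                 authResponseHeaders:
                   - "Remote-User"
                   - "Remote-Groups"
                   - "Remote-Email"
                   - "Remote-Name"
           routers:
             unraid:
               rule: "Host(`unraid.example.com`)"
               entryPoints: ["websecure"]
               middlewares: ["authelia"]
               service: unraid
               tls: {}
           services:
             unraid:
               loadBalancer:
                 servers:
                   - url: "https://192.168.1.10"   # example backend address

     With this in place, every request to unraid.example.com must pass Authelia's session/2FA check before Traefik ever forwards it to the Unraid UI, which is the "expose indirectly, securely" point made above.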
  2. Externally, I'm using Traefik with Authentik to expose all my services, including the Unraid UI, with SSO and multiple MFA options implemented (OTP, WebAuthn and Duo Push). All services are routed through middlewares like CrowdSec and a ModSecurity WAF, and 443 to Traefik is the only port open; a sketch of this routing is below. Internally, I'm using Traefik with Authelia, again with SSO and multiple MFA options (OTP and Duo Push). I have network segregation with multiple VLANs, including one specifically for sensitive workloads such as Unraid and a backup NAS, with firewall policies that deny incoming traffic to that VLAN, very few ports exposed internally to other VLANs, and all endpoints whitelisted. The VLAN has no inbound internet connectivity (return traffic initiated from the VLAN is allowed) and isn't allocated to any Wi-Fi SSID; there's only one ethernet port in my house with access to it. Completely overkill, but why not? If you want security, you can implement it.

     Also, the default position that X shouldn't be exposed to the internet is invalid; it's all contextual. Of course, don't simply port forward your Unraid UI externally: as everyone has stated, Unraid wasn't designed to be directly, externally visible. Keep in mind the Unraid team have implemented their own plugin to allow 2FA remote access to the dashboard.
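     For illustration, the external routing described above boils down to a router with a chain of middlewares. This is a hypothetical snippet: the "crowdsec-bouncer" and "modsecurity" middlewares are placeholder names assumed to be defined elsewhere (typically via Traefik plugins or a companion container), and the Authentik host/port are examples, with the forwardAuth path following Authentik's documented Traefik integration:

         # Hypothetical Traefik routing sketch; middleware definitions for
         # CrowdSec and ModSecurity are assumed to exist elsewhere.
         http:
           middlewares:
             authentik:
               forwardAuth:
                 address: "http://authentik:9000/outpost.goauthentik.io/auth/traefik"
                 trustForwardHeader: true
           routers:
             app:
               rule: "Host(`app.example.com`)"
               entryPoints: ["websecure"]   # 443 is the only port forwarded to Traefik
               middlewares:
                 - crowdsec-bouncer   # placeholder: IP reputation / ban decisions
                 - modsecurity        # placeholder: WAF inspection
                 - authentik          # SSO + MFA gate
               service: app
               tls: {}
           services:
             app:
               loadBalancer:
                 servers:
                   - url: "http://192.168.10.20:8080"   # example backend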
  3. Hey mate, some assistance: I have the new 0.10 RC release, added my Plex account, and it's enabled and validated. In the logs I can see it's successful and that the right number of servers I have access to has been returned, yet the GUI states there are no servers available. Bug, or have I missed something? Cheers. -- Seems the issue is due to using a custom Plex server URL.
  4. Do your VMs actually turn on? Mine are visible via the portal, but passthrough of devices isn't happening: no display, no lights on keyboard/mouse, etc., even though the portal says the VM is on.
  5. How can I ensure the xorg.conf file doesn't get recreated? I am trying to utilise a custom refresh rate (the sort of config sketched below). Or is there an alternative method? Thanks.
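     For context, a custom refresh rate under X is usually declared as a Modeline in xorg.conf. This is a hypothetical fragment, with the Modeline generated by running `cvt 2560 1440 120` as an example; your resolution, timings, and identifiers will differ:

         # Hypothetical xorg.conf fragment; the Modeline values come from
         # `cvt 2560 1440 120` and are examples only.
         Section "Monitor"
             Identifier "Monitor0"
             Modeline "2560x1440_120.00"  661.25  2560 2768 3048 3536  1440 1443 1448 1558 -hsync +vsync
             Option   "PreferredMode" "2560x1440_120.00"
         EndSection

         Section "Screen"
             Identifier "Screen0"
             Monitor    "Monitor0"
             SubSection "Display"
                 Modes "2560x1440_120.00"
             EndSubSection
         EndSection

     The catch the post describes is that the xorg.conf gets regenerated, so a hand-added Modeline would need to be reapplied after regeneration (e.g. from a startup script); an alternative is setting the mode at runtime with xrandr --newmode / --addmode.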
  6. Froze again; attached is the syslog. syslog.log Have just tried the following: changing the Docker custom network from macvlan to ipvlan, and changing the VM network adapters from virtio to virtio-net (both changes sketched below).
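     For reference, these are the two changes in their non-GUI form; the network name, subnet, gateway, and parent interface here are assumptions for illustration:

         # Hypothetical CLI equivalent of recreating a custom Docker network
         # with the ipvlan driver instead of macvlan
         docker network rm homelan
         docker network create -d ipvlan \
             --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
             -o parent=br0 -o ipvlan_mode=l2 homelan

     And the corresponding interface stanza in a VM's libvirt XML after switching the model:

         <!-- Hypothetical libvirt interface definition; 'virtio-net' is the
              model name Unraid's VM manager offers alongside plain 'virtio' -->
         <interface type='bridge'>
           <source bridge='br0'/>
           <model type='virtio-net'/>
         </interface>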
  7. I have; I altered my BIOS accordingly per the recommendation in this thread when rc1 was released. Will set up a syslog service and post logs shortly (a rough sketch of the receiving side is below).
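     As an aside, a remote syslog target can be as simple as an rsyslog instance listening on UDP 514 (Unraid's own Settings > Syslog Server page handles the sending side). This is a hypothetical receiver-side fragment; the port and file path are assumptions:

         # Hypothetical rsyslog.conf fragment on the receiving machine:
         # accept UDP syslog and write each host's messages to its own file
         module(load="imudp")
         input(type="imudp" port="514")
         template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%.log")
         *.* action(type="omfile" dynaFile="PerHost")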
  8. Please help me discern the issue. jj-diagnostics-20220511-1657.zip
  9. This. Will 6.10 final be utilizing Slackware 15?
  10. Enabled custom CSM in my BIOS: Launch Storage OpROM Policy set to UEFI only, Launch Video OpROM Policy set to Legacy only. All working, thanks gents.
  11. I disabled CSM and now VMs can turn on without that error, though there's no display, and the VM log shows the following (the same vfio_region_write failure repeats for consecutive offsets from 0xbd1df through 0xbd205):

          2021-08-09T02:16:33.519687Z qemu-system-x86_64: vfio_region_write(0000:2e:00.0:region1+0xbd1df, 0x0,1) failed: Device or resource busy
          2021-08-09T02:16:33.519692Z qemu-system-x86_64: vfio_region_write(0000:2e:00.0:region1+0xbd1e0, 0x0,1) failed: Device or resource busy
          2021-08-09T02:16:33.519703Z qemu-system-x86_64: vfio_region_write(0000:2e:00.0:region1+0xbd1e1, 0x0,1) failed: Device or resource busy
          [... the same 'Device or resource busy' line repeats for every offset up to 0xbd205 ...]
  12. Sadly no, same issue. Though I do have 2 NVMe drives (1 passthrough and 1 cache); perhaps that is why it is showing as being loaded. I'm also having the same error on other VMs which do not have the NVMe passthrough, so it seems to only occur with GPU passthrough; the NVMe passes through fine, and the USB controller passes through fine.
  13. Thanks; hopefully someone can assist soon, or I'll have to roll back. I need the VMs. jj-diagnostics-20210808-1624.zip
  14. jj-diagnostics-20210808-1624.zip This seems to be the issue:

          Loading config from /boot/config/vfio-pci.cfg
          BIND=0000:23:00.0|1987:5016 0000:2e:00.0|10de:2484 0000:2e:00.1|10de:228b
          ---
          Processing 0000:23:00.0 1987:5016
          Vendor:Device 1987:5016 found at 0000:23:00.0
          IOMMU group members (sans bridges):
          /sys/bus/pci/devices/0000:23:00.0/iommu_group/devices/0000:23:00.0
          Binding...
          chown: cannot access '/dev/vfio/25': No such file or directory
          Error: unable to adjust group ownership of /dev/vfio/25
          ---
          Processing 0000:2e:00.0 10de:2484
          Vendor:Device 10de:2484 found at 0000:2e:00.0
          IOMMU group members (sans bridges):
          /sys/bus/pci/devices/0000:2e:00.0/iommu_group/devices/0000:2e:00.0
          /sys/bus/pci/devices/0000:2e:00.0/iommu_group/devices/0000:2e:00.1
          Binding...
          chown: cannot access '/dev/vfio/32': No such file or directory
          Error: unable to adjust group ownership of /dev/vfio/32
          ---
          Processing 0000:2e:00.1 10de:228b
          Vendor:Device 10de:228b found at 0000:2e:00.1
          IOMMU group members (sans bridges):
          /sys/bus/pci/devices/0000:2e:00.1/iommu_group/devices/0000:2e:00.0
          /sys/bus/pci/devices/0000:2e:00.1/iommu_group/devices/0000:2e:00.1
          Binding...
          0000:2e:00.0 already bound to vfio-pci
          0000:2e:00.1 already bound to vfio-pci
          chown: cannot access '/dev/vfio/32': No such file or directory
          Error: unable to adjust group ownership of /dev/vfio/32
          ---
          vfio-pci binding complete

          Devices listed in /sys/bus/pci/drivers/vfio-pci:
          lrwxrwxrwx 1 root root 0 Aug 8 08:42 0000:23:00.0 -> ../../../../devices/pci0000:00/0000:00:01.2/0000:20:00.0/0000:21:01.0/0000:23:00.0
          lrwxrwxrwx 1 root root 0 Aug 8 08:42 0000:2e:00.0 -> ../../../../devices/pci0000:00/0000:00:03.1/0000:2e:00.0
          lrwxrwxrwx 1 root root 0 Aug 8 08:42 0000:2e:00.1 -> ../../../../devices/pci0000:00/0000:00:03.1/0000:2e:00.1

          ls -l /dev/vfio/
          ls: cannot access '/dev/vfio/': No such file or directory
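      The failing chown calls suggest the /dev/vfio character devices for IOMMU groups 25 and 32 were never created, i.e. vfio-pci claimed the devices but the VFIO group nodes are absent. A few hedged sanity checks (the group numbers come from the log above; the paths are standard Linux sysfs/VFIO locations):

          # Confirm which IOMMU group each passed-through device landed in
          readlink /sys/bus/pci/devices/0000:23:00.0/iommu_group
          readlink /sys/bus/pci/devices/0000:2e:00.0/iommu_group

          # List every device per IOMMU group to inspect the grouping
          find /sys/kernel/iommu_groups/ -type l

          # /dev/vfio/<group> should exist once binding succeeds; if /dev/vfio/
          # is missing entirely, the IOMMU is likely not active (check the BIOS
          # AMD-Vi/IOMMU setting and the kernel boot messages)
          ls -l /dev/vfio/
          dmesg | grep -iE 'iommu|amd-vi|vfio'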