AceRimmer

Members
  • Posts: 61
  • Joined

  • Last visited

About AceRimmer

  • Birthday 07/07/1988

Converted

  • Gender
    Male
  • URL
    google banned me :D
  • Location
    Ireland or Turkey ... depends on the time of year
  • ICQ
    I miss it, but I miss IRC more
  • AIM
    I miss it, but I miss MSN Messenger more
  • YIM
    I don't miss Yahoo Messenger...
  • MSN Messenger
    I miss it, but I don't miss dial-up just to chat with someone
  • Personal Text
    If you can't install Windows 95 with a floppy boot disk then I have no time for you

Recent Profile Visitors

582 profile views

AceRimmer's Achievements

Rookie (2/14)

4 Reputation

  1. No worries, I'll leave them up for a few days; hopefully someone will stumble upon them, and if not I'll post elsewhere. Thanks for your help
  2. I've zipped all the logs in that folder and attached them. What's jumping out at me is the following error from the catalina.out log. I would be happy to trial-and-error it, but I don't know where to start with increasing the client timeouts or adding "autoReconnect=true" to the configuration. Do I need to log into the SQL server to make those changes, or can they be made from the Docker parameters?
     <4>Execution of ping query 'SELECT 1' failed: The last packet successfully received from the server was 20,713,706 milliseconds ago. The last packet sent successfully to the server was 20,713,706 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
     log.zip
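     A minimal sketch of the server-side change, assuming the bundled MariaDB is reachable from inside the container; the container name "guacamole" and the 8-hour value are placeholders, not something from this thread:

         # Raise the idle timeout on the running server (add -p if root has a password)
         docker exec -it guacamole mysql -uroot -e "
           SHOW VARIABLES LIKE 'wait_timeout';      -- current value, in seconds
           SET GLOBAL wait_timeout = 28800;         -- example: 8 hours
           SET GLOBAL interactive_timeout = 28800;  -- keep the interactive limit in step
         "

     SET GLOBAL only lasts until the server restarts; persisting it would mean putting the same values under [mysqld] in the container's MariaDB config. The autoReconnect=true route would instead be a change to the JDBC connection string, which this image may or may not expose as a parameter.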
  3. So I just tried to log in at work and it failed. The usual error. This is today's Docker log:
     User UID: 99
     User GID: 100
     ----------------------
     Using existing properties file.
     Using existing MySQL extension.
     Using existing TOTP extension.
     No permissions changes needed.
     Database exists.
     Database upgrade not needed.
     2021-08-30 09:07:15,315 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
     2021-08-30 09:07:15,322 INFO Set uid to user 0 succeeded
     2021-08-30 09:07:15,362 INFO supervisord started with pid 28
     2021-08-30 09:07:16,363 INFO spawned: 'guacd' with pid 31
     2021-08-30 09:07:16,364 INFO spawned: 'mariadb' with pid 32
     2021-08-30 09:07:16,365 INFO spawned: 'tomcat9' with pid 33
     guacd[31]: INFO: Guacamole proxy daemon (guacd) version 1.3.0 started
     guacd[31]: INFO: Listening on host 0.0.0.0, port 4822
     2021-08-30 09:07:17,820 INFO success: guacd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
     2021-08-30 09:07:17,820 INFO success: mariadb entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
     2021-08-30 09:07:17,821 INFO success: tomcat9 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
  4. I don't think 2FA is the problem, because I get that error when it's turned off as well as on.
  5. I've only been using it for about a week, so I've nothing to compare it to. I'm not using an external DB, though.
  6. I have an ongoing issue with Apache Guacamole when logging in at work. I enter my username & password without any issues, then I enter my 2FA code and am presented with the attached error screenshot. When I turn off 2FA for Guacamole and log in, I also receive the same error. The only way I can log in is to VPN into the server from my mobile, create a new user in Guacamole with 2FA turned on, log in with the new user's credentials on my work PC, and scan the barcode into the Google Authenticator app to set it up; then I have access to Guacamole. Once I navigate away from the Apache web interface and navigate back, I get the above error once again and need to delete the account over the VPN and repeat the new-account setup to regain access. Any advice?
  7. I'll have a look at Unassigned Devices. It's more so for other USB devices that I'm looking for this functionality, not the Unraid flash drive. Cheers for that
  8. Not sure if something like this exists already, but my old ReadyNAS had a feature to automatically back up any device plugged into the front USB ports on the NAS. I was able to plug in my SD card reader with the SD card from my camera and it would auto-back up to the NAS. Does anyone know of something like this for Unraid, with the ability to specify a USB port to watch for new devices?
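     For reference, a rough sketch of how such a watcher could be wired up by hand with udev; nothing below is an existing Unraid feature, and the port path, script name, and destination share are all placeholders:

         # /etc/udev/rules.d/99-usb-backup.rules (hypothetical)
         # Fire a script whenever a block device carrying a filesystem appears on
         # one specific physical port; find your port's KERNELS value with:
         #   udevadm info -a -n /dev/sdX
         ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_USAGE}=="filesystem", \
           KERNELS=="1-4", RUN+="/usr/local/bin/usb-backup.sh %k"

         #!/bin/bash
         # /usr/local/bin/usb-backup.sh (hypothetical) -- mount the new device and
         # copy it off. Note udev kills long-running RUN programs, so a real
         # version should hand the rsync off to a detached background job.
         DEV="/dev/$1"                # %k passes the kernel name, e.g. sdg1
         MNT="/mnt/usb-backup"
         mkdir -p "$MNT"
         mount "$DEV" "$MNT" || exit 1
         rsync -a "$MNT/" "/mnt/user/backups/usb/$(date +%Y%m%d-%H%M%S)/"
         umount "$MNT"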
  9. I am not using WireGuard to connect to the virtual machine. I am only using DuckDNS. My virtual machine forwards an outside port which points to my DuckDNS URL, and this is how I log in to Home Assistant outside of my LAN. It works perfectly. If you search YouTube for "DuckDNS Letsencrypt Home Assistant" you can see how it is done.
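     For background on the DuckDNS half (the subdomain and token below are placeholders): the service only needs a periodic HTTP request to keep the hostname pointed at a changing home IP, which the Home Assistant DuckDNS add-on handles on a schedule. Done by hand it is a one-liner:

         # Leaving ip= empty tells DuckDNS to use the address the request came
         # from; the service replies "OK" on success.
         curl "https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip="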
  10. Thank you for your help. The issue was resolved by doing as you suggested. Much appreciated
  11. Not really sure where to start with this. I've uploaded the latest diag zip from the flash drive if anyone is feeling up to it. [UPDATE] Diagnostics Zip attachment deleted
  12. Every time I reboot the machine it starts a parity check (version 6.9.0-rc2). I'm going to the dashboard and clicking the "reboot the system" icon; I'm not going to Main / Array Operation / Stop before rebooting. Should I be doing this? I don't remember having to when I ran the stable release. So I'm at a dilemma: do I allow it to parity check and tax my drives (the fact that a parity check was triggered means an error was flagged somewhere, I guess)? The parity checks always come back without any errors, though. Any advice would be greatly appreciated.
  13. Forgive me if my knowledge of this subject isn't great; I am only about 7 or 8 months into using Unraid. This stems from the Nvidia 2070 Super GPU in my rig, which is passed through to a Windows VM. The GPU works perfectly passed through, but I have no control over the LEDs on the device. Windows will not detect them, and I'm left with a glowing rainbow in my room 24/7.
      In relation to the GPU, I have its IOMMU group stubbed in the VFIO file (see below). However, on booting Unraid this GPU still outputs video, and it is not until the VM manager wakes after boot-up that Unraid passes the GPU to the VM.
      IOMMU group 25:
      [10de:1e84] 0a:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1)
      [10de:10f8] 0a:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
      [10de:1ad8] 0a:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
      (This controller is bound to vfio; connected USB devices are not visible.)
      [10de:1ad9] 0a:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)
      Regardless of the stubbing, Unraid still sees and recognizes this device as a "passthroughable" GPU for a VM, the same way it would if I didn't have it stubbed (see below image). This leads me to think one of two possible scenarios is happening:
      • Since Unraid has control of the GPU during boot (it outputs to the display) and still lists it as an available graphics card, it hasn't fully stubbed the device, and that is why I can't control the LEDs; or
      • Unraid has held on to the LED controller, but the LED controller isn't listed in Unraid or in the IOMMU groups because there is no driver for it on Unraid (the same way folk had issues with Bluetooth adaptors not appearing in Unraid before there was any driver support).
      So I have a few questions you may be able to answer:
      • Is there any point in stubbing the GPU, since Unraid still sees and recognizes the device as a "passthroughable" GPU the same way it would if I didn't stub it?
      • Is it possible to stub an entire PCI lane to a VM?
      • Is it safe to stub manually (as in physically typing it into the VFIO config file on the flash drive), given that many of the PCI-related bridges can't be selected in the VFIO bind list in System Devices? Is the system likely leaving them unselectable for a good reason (power management, for starters, or interfering with the north/south bridge's control over a device)?
      • Assuming a PCI lane can be stubbed, how do I tell which IOMMU group / PCI lane my GPU is attached to? Below are the IOMMU groups I can't select to VFIO-bind, but I also have no reference as to their purpose on the motherboard (refer to FIG-A below).
      • Finally, just for my curiosity: I'm still left with some remaining devices on my IOMMU list that I can stub, but is there a way to tell what these devices actually are? Obviously I can identify the spare NIC and the encryption controller, but for the remaining devices I have no reference as to what they do. Should I be sending them to my main Windows VM? Am I losing out by not doing so, e.g. some hardware-assisted hashing functionality in Windows by not passing through the Cryptographic Coprocessor, or does Unraid actually use it but list it as "stubbable" anyway? (Refer to FIG-B below.)
      FIG-A
      IOMMU group 0: [1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
      IOMMU group 1: [1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
      IOMMU group 2: [1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
      IOMMU group 3: [1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
      IOMMU group 4: [1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
      IOMMU group 5: [1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
      IOMMU group 6: [1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
      IOMMU group 7: [1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
      IOMMU group 8: [1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
      IOMMU group 9: [1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
      IOMMU group 10: [1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
      IOMMU group 11: [1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
      IOMMU group 15: [1022:57ad] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
      IOMMU group 16: [1022:57a3] 03:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
      IOMMU group 17: [1022:57a3] 03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
      IOMMU group 18: [1022:57a3] 03:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
      FIG-B
      IOMMU group 12:
      [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
      [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
      IOMMU group 24: [8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
      IOMMU group 26: [1022:148a] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
      IOMMU group 27: [1022:1485] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
      IOMMU group 28: [1022:1486] 0c:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
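      On the last two questions, two stock lspci invocations (run from the Unraid terminal) can help decode the listings above; the 0c:00.1 address below is just the Cryptographic Coprocessor from FIG-B, used as an example:

          # Show a device's full name, its [vendor:device] IDs, and which
          # kernel driver (if any) currently claims it
          lspci -nnk -s 0c:00.1

          # Print the PCI topology as a tree, to trace which GPP bridge the
          # 0a:00.x GPU functions sit behind
          lspci -tv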
  14. I'm going to mark this topic as solved. If anyone ever finds this post and is wondering what my solution was: I used DuckDNS + Let's Encrypt on the Home Assistant VM. I can access the HA server with the DuckDNS URL in the mobile app, and it signs in over SSL.
  15. No, that doesn't fix it. I can reach my Docker containers just fine (Sonarr, Lidarr, Radarr, Emby, etc.), no problem on either VPN type below:
      • remote tunneled access
      • remote access to LAN
      But I can't ping my Home Assistant Core VM server from my mobile over either of the above VPN types. On WireGuard, the below message appears on the right side of my screen (screenshot attached). So my Dockers are getting around this with the Unraid Docker "macvlan" network, but is there a workaround, or a "macvlan" style of config, for a VM?
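      One avenue worth noting (an assumption on my part, not something confirmed in this thread): with "remote access to LAN", other LAN hosts, including VMs bridged onto br0, can only answer WireGuard peers if the router knows to route the tunnel subnet back to the Unraid box, typically via a static route. A sketch with example addresses:

          # 10.253.0.0/24 = example WireGuard tunnel subnet,
          # 192.168.1.10  = example Unraid server LAN IP.
          # Most consumer routers expose the same thing as a "static routes"
          # page in the web UI rather than a shell.
          ip route add 10.253.0.0/24 via 192.168.1.10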