ultimz

Everything posted by ultimz

  1. Update went through smoothly. Thanks to the Unraid team.
  2. Just moved to this docker from the unsupported one. Thanks, it seems to have worked without any issues. I did change port 1900 to 1901 to match my old docker settings (see the port-mapping sketch after this list). I also had to take a backup from v7.3.83 and do the restore on v7.5.187, which seemed to work fine as well.
  3. Sorry, my bad... I was trying to access the GUI through NPM, and that was stopped when I stopped the array... I had to use the IP.
  4. Hi, I was about to do the upgrade to 6.12.6, but when trying to stop the array (running 6.12.4) it got stuck on "Stopping services"... now I can't get into the GUI, but I have managed to get into the shell and download diagnostics, which are attached (see the console sketch after this list). Is anyone able to help me figure out why it's stuck? I was going to reboot the server using powerdown -r but can wait. server-unraid-diagnostics-20240125-0839.zip
  5. Thanks again @yogy - seems to have worked... I'll test and post back if I pick up any issues. I have just set WEBSOCKET_ENABLED to false and removed the port mapping for 3012 (see the container sketch after this list).
  6. @yogy does the docker variable (ADMIN_TOKEN) also need to be updated?
  7. Hi, I'm running the latest version of Vaultwarden and I have enabled push notifications as well. I am also using Nginx Proxy Manager to expose it (with just HTTPS and nothing on 3012). I can see these errors in the logs:
     [2023-11-20 12:42:12.277][rocket::server][ERROR] Upgraded websocket I/O handler failed: WebSocket protocol error: Sending after closing is not allowed
     Is there something I need to configure on NPM or Vaultwarden? (See the proxy sketch after this list.)
  8. Hi all, hoping someone can help me with a weird issue. I use Ombi with Jellyfin users that access it with their Jellyfin accounts. Recently I have had an issue where Jellyfin users can't authenticate:
     fail: Ombi.Api.Api[1000]
           StatusCode: Unauthorized, Reason: Unauthorized, RequestUri: https://jf.(domain)/users/authenticatebyname
     fail: Microsoft.AspNetCore.Identity.UserManager[0]
           Jellyfin Login Failed
     Newtonsoft.Json.JsonReaderException: Unexpected character encountered while parsing value: E. Path '', line 0, position 0.
        at Newtonsoft.Json.JsonTextReader.ParseValue()
        at Newtonsoft.Json.JsonReader.ReadForType(JsonContract contract, Boolean hasConverter)
        at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
        at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
        at Newtonsoft.Json.JsonSerializer.Deserialize(JsonReader reader, Type objectType)
        at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
        at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
        at Ombi.Api.Api.Request[T](Request request, CancellationToken cancellationToken)
        at Ombi.Api.Jellyfin.JellyfinApi.LogIn(String username, String password, String apiKey, String baseUri) in /home/runner/work/Ombi/Ombi/src/Ombi.Api.Jellyfin/JellyfinApi.cs:line 74
        at Ombi.Core.Authentication.OmbiUserManager.CheckJellyfinPasswordAsync(OmbiUser user, String password) in /home/runner/work/Ombi/Ombi/src/Ombi.Core/Authentication/OmbiUserManager.cs:line 232
     warn: Ombi.Controllers.V1.TokenController[0]
     Local users work fine. Any ideas what the issue could be? The username and password are definitely correct. (See the curl sketch after this list.)
  9. @KluthR I just wanted to say thanks for your hard work on supporting the old plugin for so long and this awesome new one. Appreciate it!
  10. SOLVED - it seems tdarr_node is linked to the same logs folder (see the volume-check sketch after this list). Hi, I'm picking up an error when backing up tdarr - anyone have any idea what could be causing the file to get modified while the docker is stopped? Here are the logs:
      [03.10.2023 02:40:49][ℹ️][tdarr] Stopping tdarr... done! (took 4 seconds)
      [03.10.2023 02:40:53][ℹ️][tdarr] Should NOT backup external volumes, sanitizing them...
      [03.10.2023 02:40:53][ℹ️][tdarr] Calculated volumes to back up: /mnt/user/appdata/tdarr/logs, /mnt/user/appdata/tdarr/server, /mnt/user/appdata/tdarr/configs
      [03.10.2023 02:40:53][ℹ️][tdarr] Backing up tdarr...
      [03.10.2023 02:41:22][ℹ️][tdarr] Backup created without issues
      [03.10.2023 02:41:22][ℹ️][tdarr] Verifying backup...
      [03.10.2023 02:41:58][❌][tdarr] tar verification failed! Tar said: tar: Removing leading `/' from member names; mnt/user/appdata/tdarr/logs/Tdarr_Node_Log.txt: Mod time differs; mnt/user/appdata/tdarr/logs/Tdarr_Node_Log.txt: Size differs
      [03.10.2023 02:42:15][ℹ️][tdarr] Starting tdarr... (try #1) done!
  11. Hi @SimonF, thanks for assisting with this. I also passed through a USB controller to the VM... would the upgrade to 6.12.4 have impacted the bind? I see this under System Devices, and it's not ticked to bind selected at boot for my VM to use it (but my GPU entries are ticked):
      [1022:145f] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 xHCI Compliant Host Controller
      Bus 005 Device 001 Port 5-0 ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 006 Device 001 Port 6-0 ID 1d6b:0003 Linux Foundation 3.0 root hub
      Would ticking the entry and binding selected at boot fix my issue and get me back to where I was? (See the driver-check sketch after this list.)
  12. Hi, I am getting the following error when I reboot my server:
      Event: VM Autostart disabled
      Subject: vfio-pci-errors
      Description: VM Autostart disabled due to vfio-bind error
      Importance: alert
      Please review /var/log/vfio-pci-errors
      The only log entry in that file is:
      Error: Device 0000:0a:00.3 does not exist, unable to bind device
      Both the VMs that I have start up fine manually (one has a GPU passed through to it)... could it be an issue with the GPU? (See the vfio-check sketch after this list.)
      [10de:1b82] 03:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070 Ti] (rev a1)
      [10de:10f0] 03:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
      I have attached the diagnostics file. Thanks in advance for the assistance! server-unraid-diagnostics-20231001-2053.zip
  13. Upgraded from 6.11.5 to 6.12.4 - it was pretty seamless. I just had to reinstall 2 plugins (gpustat and NUT) and also change Docker to use ipvlan, as it had picked up a few call traces. Thanks to all the devs for their hard work. Appreciated!
  14. Sure, thanks - I don't mind sharing it then (I took a quick look at the file). I've filtered it down to just that time period. syslog.log
  15. Thanks - found the logs. Is it safe for me to post them here? Not sure if they include any sensitive information...
  16. OK, thanks guys, and yes, the syslog server was enabled before the reboot (a while back, actually)... how can I share those logs?
  17. Hi, this morning I noticed that I couldn't connect to my Sonarr docker (nothing out of the ordinary in its logs), so I tried to stop it, but the "server error" warning came up and it just continued running. I tried the docker kill command from the console and it still wouldn't stop. I then killed the Sonarr process by finding its specific PID, and it stopped but wouldn't start up. I also noticed that the mover had been running for a long time and never completed, so I stopped the mover process from the console and then tried to restart. That stopped, and then I tried to stop the array to restart the server but couldn't. It was just stuck at "Sync filesystems". The only thing accessing /mnt/user was some syslog processes, so after a while of nothing happening I killed those. No luck stopping the array. powerdown -r also didn't help, as I then lost connection to the IP of the Unraid server while the server was still on. I ended up doing a hard shutdown. Not sure why this happened, but it's doing a parity check now... and everything seems to be back up. Can anyone tell me why this happened? Logs attached (see the open-files sketch after this list). server-unraid-diagnostics-20230824-0805.zip
  18. I made the jump from 7.1.68 to 7.3.83. The only issue I had was an error when starting the container... it complained about an invalid memory heap size. I had to set MEM_LIMIT and MEM_STARTUP back to their defaults and it worked again.
  19. @KluthR yes I can start the container manually without any issues. Auto update is not enabled.
  20. Hi @KluthR, I have installed the plugin again (over the old one) but I am still getting errors when trying to start a certain Docker container: linuxserver/unifi-controller. Any idea what I could try to solve this? For now I have set the backup not to stop this container and have also excluded its data folder.
  21. Thanks to @m33ts4k0z for finding a solution for this issue and to @Squid for the plugin! My docker containers are no longer showing as "not available".
  22. I successfully upgraded from 6.10 to 6.11.5 and everything went smoothly. Thanks to all the Unraid devs - you guys rock!
  23. Thanks, I have installed the old version again. I'll keep an eye out here to see when the issues have been resolved and then install it again (but I will be able to test the new version if required - just let me know!)
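
A port-mapping sketch for post 2, showing how remapping host port 1901 onto the container's 1900/udp discovery port might look as a docker run command. The image name, appdata path and other ports are assumptions (and the image's required MongoDB variables are left out), so adjust to the actual template:

    # Hypothetical sketch - host 1901 feeds the container's 1900/udp (L2 discovery)
    docker run -d \
      --name=unifi-network-application \
      -p 8443:8443 \
      -p 1901:1900/udp \
      -v /mnt/user/appdata/unifi-network-application:/config \
      lscr.io/linuxserver/unifi-network-application:latest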
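
A console sketch for post 4, assuming SSH or local console access while the GUI is unreachable; the diagnostics zip should land in /boot/logs on the flash drive:

    diagnostics    # collects a diagnostics zip (the file attached to the post)
    powerdown -r   # attempts a clean array stop before rebooting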
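
A container sketch for post 5, showing roughly what that Vaultwarden change looks like as a docker run command - the container name, host port and data path are placeholders:

    # Serve everything (including websockets) on the main port and drop the 3012 mapping
    docker run -d \
      --name=vaultwarden \
      -e WEBSOCKET_ENABLED=false \
      -p 8082:80 \
      -v /mnt/user/appdata/vaultwarden:/data \
      vaultwarden/server:latest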
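
A proxy sketch for post 7: roughly the nginx-style configuration that Nginx Proxy Manager's "Websockets Support" toggle produces for a proxy host, shown for reference rather than to paste verbatim (the upstream name and port are assumptions):

    location / {
        proxy_pass http://vaultwarden:80;        # assumed upstream name/port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # forward the websocket upgrade
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }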
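
A curl sketch for post 8: a hedged way to see what the Jellyfin authentication endpoint actually returns to Ombi. If the body is not JSON (for example an HTML or plain-text error page from the reverse proxy), that would explain the "Unexpected character ... E" parse exception. The hostname and credentials are placeholders:

    # Call the same endpoint Ombi uses and inspect the raw response body
    curl -sk -X POST "https://jf.example.com/Users/AuthenticateByName" \
      -H 'Content-Type: application/json' \
      -H 'X-Emby-Authorization: MediaBrowser Client="test", Device="curl", DeviceId="test", Version="1.0"' \
      -d '{"Username":"someuser","Pw":"somepassword"}'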
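
A volume-check sketch for post 10: a hedged way to list every container's mount sources and spot anything else (such as tdarr_node) that bind-mounts the same logs folder and can keep modifying files while the tdarr container is stopped for backup:

    # Print each container's name and mount sources, then filter for the tdarr paths
    docker ps -aq | xargs docker inspect \
      --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' | grep -i tdarr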
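
A driver-check sketch for post 11: a quick way to see which kernel driver currently owns the USB controller ("vfio-pci" means it is reserved for passthrough, "xhci_hcd" means the host still has it):

    # Show the device, its IDs and the "Kernel driver in use" line
    lspci -nnk -s 0b:00.3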
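
A vfio-check sketch for post 12, comparing what Unraid tries to bind at boot against the devices that actually exist after the upgrade (the config path is the one the System Devices page writes to, if I have it right):

    cat /boot/config/vfio-pci.cfg       # addresses Unraid binds to vfio-pci at boot
    lspci -nn | grep -Ei 'usb|0a:00'    # is 0000:0a:00.3 still present at that address?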
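
An open-files sketch for post 17: a hedged way to see what still holds files open under /mnt/user when the array hangs at "Sync filesystems", before resorting to killing processes:

    fuser -vm /mnt/user         # quick list of PIDs using the mount
    lsof /mnt/user 2>/dev/null  # more detail, can be slow on large shares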