All Activity


  1. Past hour
  2. Have you already turned off the Advanced View on the Docker page? That has always caused spikes like this too. Just as a small hint: the Shelly plug you also show in the video does not deliver real-time data; the data it delivers is always delayed by a few seconds. Are you sure that no container is doing something in bursts? Does this also occur when all containers are really offline (make sure you also disable the Advanced View on the Docker page)? Also do a cross-check: stop all containers, turn off the Advanced View on the Docker page, then close the Unraid WebUI completely, wait exactly 2 minutes and then check the Shelly. The background here is that you probably forget that the WebUI itself can produce short spikes, depending on what you have enabled, since it has to collect all the device data (temperatures, disk space, etc.) somewhere <- but I don't think that is what triggers it.
  3. I would strongly recommend always checking the homepage/documentation of RustDesk itself first. This container is based on the self-hosted version of RustDesk; if you want to change the ID, you have to use the paid version (which is not supported by this container). You can see this on the pricing page:
  4. Today
  5. I don't think that this is the /var/log directory from your server; it seems more likely that you are using Krusader from binhex and you are in the container's /var/log directory (a quick way to check is sketched below). I see nothing suspicious in your logs. However, may I ask why you are still on Unraid 6.12.6? Unraid 6.12.9 was released a few days ago.
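     A minimal way to confirm the container-vs-host mix-up, assuming the container is named binhex-krusader (check "docker ps" for the actual name):

       # on the Unraid host
       ls -la /var/log
       # the same path inside the Krusader container
       docker exec binhex-krusader ls -la /var/log

     If the two listings differ, Krusader was showing you the container's own /var/log.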
  6. Have you read the description? Just remove the server.cfg from GAME_CONFIG, create another TCP port with the container and host port both set to 40120, and click Apply; afterwards you can connect to txAdmin (see the sketch below).
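     A hedged sketch of what the new mapping amounts to, plus a reachability check once applied (the IP is a placeholder for your server's address):

       # the template adds the equivalent of: -p 40120:40120/tcp
       # once applied, txAdmin should answer on the host:
       curl -I http://192.168.1.10:40120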
  7. I'm facing a strange issue: if I reboot the server, my Windows VM runs fine with SR-IOV on my i5 14500, but if I quit the VM, my whole GPU disappears; there is no GPU in the device list and the plugin shows no GPU found until I reboot the server.

     i915 0000:00:02.1: [drm] *ERROR* tlb invalidation response timed out for seqno 23
     Mar 29 05:57:58 NAS kernel: vfio-pci 0000:00:02.1: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: Running in SR-IOV VF mode
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] GT0: GUC: interface version 0.1.4.1
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] VT-d active for gfx access
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] Using Transparent Hugepages
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] GT0: GUC: interface version 0.1.4.1
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: GuC firmware PRELOADED version 1.4 submission:SR-IOV VF
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: HuC firmware PRELOADED
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] Protected Xe Path (PXP) protected content support initialized
     Mar 29 05:57:58 NAS kernel: i915 0000:00:02.1: [drm] PMU not supported for this GPU.
     Mar 29 05:57:58 NAS kernel: sdd: sdd1 sdd2 sdd3 sdd4
     Mar 29 05:57:58 NAS kernel: [drm] Initialized i915 1.6.0 20201103 for 0000:00:02.1 on minor 1
     Mar 29 05:57:58 NAS kernel: ata6.00: Enabling discard_zeroes_data
     Mar 29 05:57:58 NAS kernel: sdd: sdd1 sdd2 sdd3 sdd4
     Mar 29 05:57:58 NAS usb_manager: Info: rc.usb_manager vm_action Windows 11 stopped end -
     Mar 29 05:57:59 NAS kernel: i915 0000:00:02.1: [drm] *ERROR* tlb invalidation response timed out for seqno 23
     Mar 29 05:57:59 NAS kernel: i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=none:owns=io+mem
     Mar 29 05:57:59 NAS kernel: i915 0000:00:02.2: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
     Mar 29 05:57:59 NAS kernel: pci 0000:00:02.1: Removing from iommu group 19
     Mar 29 05:57:59 NAS kernel: i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=io+mem:owns=io+mem
     Mar 29 05:57:59 NAS kernel: pci 0000:00:02.2: Removing from iommu group 20
     Mar 29 05:58:00 NAS unassigned.devices: Disk with ID 'Samsung_SSD_860_EVO_500GB_S3Z2NB0K660578V (dev1)' is not set to auto mount.
     Mar 29 05:58:00 NAS unassigned.devices: Disk with ID 'Samsung_SSD_860_EVO_500GB_S3Z2NB0K660578V (dev1)' is not set to auto mount.
     Mar 29 05:58:00 NAS unassigned.devices: Disk with ID 'Samsung_SSD_860_EVO_500GB_S3Z2NB0K660578V (dev1)' is not set to auto mount.
     Mar 29 05:58:00 NAS unassigned.devices: Partition '/dev/sdd2' does not have a file system and cannot be mounted.
     Mar 29 05:58:01 NAS kernel: i915 0000:00:02.0: Disabled 2 VFs
     Mar 29 05:58:01 NAS kernel: Console: switching to colour dummy device 80x25
     Mar 29 05:58:01 NAS acpid: input device has been disconnected, fd 11
     Mar 29 05:58:01 NAS kernel: pci 0000:00:02.0: Removing from iommu group 0
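     Given the "Disabled 2 VFs" line at the end of the log, a hedged sketch worth trying before a full reboot; this assumes the iGPU's physical function is at 0000:00:02.0 as shown above and normally exposes 2 VFs:

       # tear down leftover VF state, then recreate the VFs (device path taken from the log)
       echo 0 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs
       echo 2 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs
       # check whether the virtual functions reappeared
       lspci | grep -iE 'vga|display'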
  8. I stumbled across this entirely by accident while looking for the proper shutdown command, but came up with the same idea you did. I installed a cheap UPS for a customer to prevent their server from shutting down abruptly, as that had already killed one drive. Here's the code I made with ChatGPT's help; instead of pinging a device on the network, I have it ping the gateway itself, which works out perfectly here since their network isn't on a battery backup. If you want to use this, replace the IP with your gateway and adjust the timings.

     #!/bin/bash
     # Ping the gateway; if it stays down for 5 minutes, assume a power outage and shut down.
     while true; do
         printf "%s" "Checking local LAN gateway @ 192.168.86.1..."
         if ping -c 1 -n -w 1 192.168.86.1 &> /dev/null; then
             printf "\n%s" "gateway is responding"
         else
             printf "\n\n------------------------------------------------------------------------"
             printf "\n%s" "gateway is not responding, waiting 5 minutes before checking again..."
             printf "\n------------------------------------------------------------------------\n"
             # Wait for 5 minutes before checking again
             sleep 300
             printf "\n%s\n" "Rechecking gateway..."
             if ping -c 1 -n -w 1 192.168.86.1 &> /dev/null; then
                 printf "\n------------------------------------------------------------------------"
                 printf "\n%s" "Gateway is now responding, shutdown avoided..."
                 printf "\n------------------------------------------------------------------------\n"
             else
                 printf "\n------------------------------------------------------------------------"
                 printf "\n%s" "Gateway is still not responding after 5 minutes, shutting down..."
                 printf "\n------------------------------------------------------------------------\n"
                 powerdown
                 exit 1
             fi
         fi
         # If the loop reaches here, the gateway is responding
         printf "\n%s\n\n" "Checking again in 30 seconds..."
         sleep 30
     done

     I also formatted it so the logs look nice and pretty, with good spacing for at-a-glance troubleshooting.
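     A hedged usage sketch, assuming the script is saved on the flash drive (the path and log file name are placeholders; the User Scripts plugin set to run at array start would work just as well):

       chmod +x /boot/config/gateway-watchdog.sh
       nohup /boot/config/gateway-watchdog.sh >> /var/log/gateway-watchdog.log 2>&1 &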
  9. Hello, recently I've been struggling with my server and I'm not sure what to do at the point I'm at now. About a day ago I ran into some issues which I thought were server-related, but they ended up just being Chrome-related. In the process of troubleshooting yesterday, though, I changed some settings after scanning with Fix Common Problems. I essentially changed my settings to match this document it pointed me to: https://docs.unraid.net/unraid-os/release-notes/6.12.4/#fix-for-macvlan-call-traces. At first I followed the steps that include the "Settings > Docker > Host access to custom networks = Enabled" step, but then backtracked and changed the macvlan setting to ipvlan instead, since I figured that would be fine. Earlier today, I noticed I couldn't stay connected to my server for longer than 30 seconds without it disconnecting, even after multiple reboots. I ran memory tests and everything seems to be fine, so I'm not sure what else to check, since I'm relatively new and can't really touch anything. When hooked up to a monitor directly, it seems to sit perfectly fine at the CLI login prompt. I took a look at the syslog and noticed some avahi-daemon entries which I'm not sure about. I've attached my syslog to see if this is possibly suspicious, as I'm not very familiar with Avahi. Any help would be greatly appreciated, as I'm a little worried about a breach and I'm still learning. Thank you! syslog
  10. After upgrading from 6.12.6 to 6.12.8, 1 docker container now says Orphan Image. How does one recover these images? Many thanks.
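     For context, an orphan image is a Docker image that no container references any more; reinstalling the app from Apps > Previous Apps normally brings the container back with its old settings. A sketch to see what Docker has flagged:

       # list dangling (untagged) images, which the Unraid UI labels "Orphan Image"
       docker images --filter "dangling=true"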
  11. For the FiveM docker, can we add txAdmin to it?
  12. Hello Unraid Community, I'm experiencing an issue with the USB 3.0 ports on my Gigabyte X670 AORUS ELITE AX motherboard and seeking your expertise. The ports in question are the two USB 3.0 ports located at the back, just below the USB 2.0 ports closest to the exhaust fan. This issue has led to intermittent disconnections of connected devices, affecting only the USB 3.0 ports currently, although I've noticed similar behavior with the USB 2.0 ports in the past. This is indeed a new build that has suffered from this issue since day one (about 4 months ago).

     System Details:
     Motherboard: Gigabyte X670 AORUS ELITE AX, BIOS version F22b (dated 02/06/2024)
     CPU: AMD Ryzen 9 7950X 16-Core @ 4500 MHz
     RAM: Corsair dual-channel 64 GiB DDR5
     OS: Unraid 6.12.8, kernel Linux 6.1.74-Unraid x86_64
     Others: C-States enabled, HVM enabled, IOMMU enabled; network: bond0 fault-tolerance (active-backup), MTU 1500

     Troubleshooting Steps Taken:
     Isolated devices: disconnected USB devices one at a time to identify if any particular device was causing the issue. The problem persisted, eliminating device-specific faults.
     BIOS update: updated the BIOS to the latest version (F22b). However, it's worth noting that the F22b version does not introduce any changes from the F22 version, as confirmed by Gigabyte.

     The persistent nature of this issue across different types of USB ports suggests a more systemic problem, possibly related to the motherboard's USB controller or its interaction with Unraid. I'm attaching my diagnostics file for a more detailed analysis but would appreciate any insights or recommendations from the community. Could this be a known issue with a potential workaround, or are there any settings within Unraid or the BIOS that I might have overlooked? Thank you in advance for your help and support. jasmasiserver-diagnostics-20240329-0022.zip
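     A small sketch for catching the moment a disconnect happens, run from the Unraid console while reproducing the issue (the filter is only a starting point):

       # follow the kernel log with readable timestamps, showing USB/xhci events as they occur
       dmesg -wT | grep -Ei 'usb|xhci'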
  13. Can you recommend one that has several ports on it that I can purchase on Amazon? Like I said, I was really interested in the LSI 9300 and came super close to ordering it, but when I started reading about how hot it gets and that you need to mount a fan, etc., I decided against it because of the chance of ruining the motherboard or other heat-related issues. I have good cooling in the system (the CPU is at 32 and the drive temps are 34 to 43), but I just don't want to chance it. Can you or anyone recommend a controller that will give me 8 or more connections? (I have 12 drives total, including 10 SATA and 2 SSD.)
  14. OK, looks like an issue with decoding the video. You may try the following: stop the container, then delete the codec folder located in your appdata Plex subdirectory; sample location: \appdata\PlexMediaServer\Library\Application Support\Plex Media Server (a console sketch of this is below). Then again, you are actually in the wrong thread here... this is the Plex linuxserver thread, not the plexinc one. Well, so on your Windows server you were using a GPU; did you look into whether it is actually transcoding when using your Roku? As for adding it to Plex on Unraid, there is a manual on the Nvidia plugin page... page 1, incl. pictures. Also, did you check whether it is actually using it? You may rather look at the plexinc thread, as this is actually the wrong place.
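     A hedged console sketch of the codec-folder step, assuming appdata is at /mnt/user/appdata and the container is named plex (adjust both to your setup; Plex recreates the folder on next start):

       docker stop plex
       rm -rf "/mnt/user/appdata/PlexMediaServer/Library/Application Support/Plex Media Server/Codecs"
       docker start plex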
  15. I experienced this/similar behaviour also after upgrading to 6.12.9. Upgraded: 3 drives in my pool "unavailable" (pretty sure it is the three that are logical drives above 32), so the pool won't import and calamity ensues... Downgraded to 6.12.8: instantly works properly. FWIW, I'm running a hybrid ZFS pool (leftover from my old bare-metal Ubuntu setup; works fine with no issues other than the UI and a forced "zpool import" on boot). And these are the 10-port SATA controller details (with 9 drives connected), output from "lshw -class storage -class disk":

     *-sata
          description: SATA controller
          product: ASM1166 Serial ATA Controller
          vendor: ASMedia Technology Inc.
          physical id: 0
          bus info: pci@0000:01:00.0
          logical name: scsi5
          logical name: scsi6
          logical name: scsi7
          logical name: scsi8
          logical name: scsi9
          logical name: scsi10
          logical name: scsi34
          logical name: scsi35
          logical name: scsi36
          version: 02
          width: 32 bits
          clock: 33MHz
          capabilities: sata pm msi pciexpress ahci_1.0 bus_master cap_list rom emulated
          configuration: driver=ahci latency=0
          resources: irq:129 memory:d1182000-d1183fff memory:d1180000-d1181fff memory:d1100000-d117ffff
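     Two hedged checks using only stock tools, to confirm the "logical drives above 32" theory and to see what ZFS can currently find:

       # show block devices with their SCSI host:channel:target:lun addresses
       lsblk -o NAME,HCTL,SIZE,MODEL
       # list pools available for import and the state of their members (read-only scan)
       zpool import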
  16. I have been trying for the past 3 days with no luck. See my post on r/Unraid: https://www.reddit.com/r/unRAID/s/0YZxsjX71B
  17. Apropos of nothing (as far as I know), my Parity 2 disk had a big red X and was disabled a day or two ago. I ran the short and the long SMART tests, and both completed without error, so as far as I know the disk is okay? How can I re-enable the disk? Attaching diagnostics; not sure how to attach the SMART log. Thanks. Edit: Found this - https://docs.unraid.net/unraid-os/manual/storage-management/#rebuilding-a-drive-onto-itself Working through that now. rick-diagnostics-20240328-2043.zip
  18. Not really... I don't see any CPU spikes either... here is a fairly recent example; you would also see it in htop... yes, I can see that the Shelly app goes up every now and then... but nothing on the Unraid side that could correspond to it as a process... either it is so fast and short that htop skips over it (unlikely), or it is more likely a hardware issue... what happens when you switch the array off would be the last test now...
  19. Hello, I am having some problems with my server and I am not sure what could be the problem with it. I will be attaching the diagnostics file. Sorry there is not a lot of information. Please let me know if you need anything else to help figure it out. Thanks. syslog.txt
  20. I don't use NPM... I'm a swag user... but yes, it will probably be the same thing. Sure, it doesn't exist manually... although in NPM, if I remember correctly, you do that via the GUI under "extra"...
  21. Sorry for the lack of info. To follow up, I have a video file that is an MKV; the attached file shows info on it. The Windows Plex server shows version 4.127.1 on the General screen and is running a Ryzen 2600 with 16 GB of RAM. The Unraid server running Plex shows version 1.40.1.8227 on the General screen, running from plexinc/pms-docker; the computer is a Ryzen 3900X with 32 GB of RAM and the server is up to date. If I try to play the attached file from Plex running off the Windows Pro computer through the Roku, the video plays and there are no issues, and the same goes for playing it through the web browser. If I try to play the video from the Unraid Plex server through the Roku, it comes up to a screen where the file gets to about 33% and stays there; it shows buffering in the web app when I look at what is going on. If I try to use the web browser, it just goes to a black screen and does not progress. I did add a video card to see if it made a difference and it did not; the card was in the Windows Plex server for transcoding and is an Nvidia RTX A2000. I followed a SpaceInvader One video on getting the video card added to the Unraid server ("how to use GPU transcoding on Emby or Plex container"). A few things were different and we ended up using the Nvidia-Driver from ich777's Repository in the app store; it did not make a difference.
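     A hedged way to check whether the A2000 is actually being used while the file buffers, assuming the Nvidia driver plugin is installed (a Plex hardware transcode shows up as a Plex Transcoder process in the list):

       # refresh GPU utilisation and the process list every second
       watch -n 1 nvidia-smi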
  22. I have 6 drives in my array (2 as parity). Last night I started getting multiple errors like so. This was the SMART report on that drive today, before replacing the drive. I ordered a new drive to replace it and it came in. I turned off the machine, unplugged that drive, replaced it with an equivalent size, and turned the machine back on. I then selected that drive to replace it in the array and hit Start. It said it would take about 2 days to rebuild the data onto that drive, and then I started getting these errors. I panicked and stopped the array. Then I saw, as in the uploaded image attached, that multiple drives are stated as missing. What do I do? I have loads of data I do not want to lose (hence the 2 parity drives). Someone help!
  23. The GUI has become unresponsive at least twice more in the last month. Not sure if it's the same problem, but I would like to know if there's something obvious that I should be doing. Still on 6.12.6. This resulted in unclean shutdowns each time. Diagnostics and syslog are attached. The syslog is lightly redacted; I hope I haven't removed anything that's pertinent. (\\IP address\Tower_repo is a Tailscale connection.) This time the GUI got stuck while updating the DuckDNS container - I saw some error message (unfortunately I can't remember what now) and then the GUI gradually froze; various tabs became unresponsive. tower-diagnostics-20240329-1010.zip syslog-192.168.1.14.log
  24. Using rustdesk-aio and trying to change the client ID, but it says "Not yet supported by the server". Any idea?