bwnautilus

Members · 79 posts
Everything posted by bwnautilus

  1. Thanks for looking into this. My Unraid system that's experiencing this problem is Xeon-based and I do not see any CPU spikes when running 'sensors -A'. But as I mentioned previously, I'm back on 6.11.5 - don't want to do the upgrade/downgrade thing again. Glad the solution worked for you.
  2. To add a datapoint to this thread, I decided to roll back to 6.11.5 and wait for a solution to this problem. Running the shell commands from @CiscoCoreX, here's what I see when writing to the flash drive: The one thing in common is that I also have a Kingston DataTraveler 3.0 8GB pen drive. My other Unraid server is running 6.12.6 but has a Kingston DataTraveler 2.0 pen drive and does not exhibit the slow GUI problem. I'm wondering if there is an underlying Linux USB driver problem with the Kingston DataTraveler 3.0 series that is only manifesting in 6.12.6. Thoughts, anyone?
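     For reference, this is the kind of write test involved (a sketch only, not necessarily the exact commands from @CiscoCoreX's post; the test file path and size are my own choices):

        # Write 64MB to the boot flash with direct I/O and report throughput
        dd if=/dev/zero of=/boot/ddtest bs=1M count=64 oflag=direct status=progress
        # Clean up the test file afterwards
        rm /boot/ddtest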
  3. Not the cause for me. After upgrading to 6.12.6 and noticing this slow GUI response on dashboard, I disabled VMs and Docker services, rebooted. Problem was still there. I even rebooted without starting the array, same problem. 🤷‍♂️
  4. I'm also experiencing the same problem with 6.12.6 after recently upgrading from 6.11.5. Currently rolled back to 6.11.5 waiting for a resolution to this.
  5. @ljm42 Thanks for your suggestions. After rolling back to 6.11.5 (the GUI was back to normal) I changed the Docker settings to ipvlan and removed Connect. Downloaded 6.12.6 and rebooted. With the array not started, the GUI is still unresponsive on the Dashboard and Plugins tabs. I will roll back to 6.11.5 again and wait for an updated Unraid release.
  6. This is especially noticeable when selecting the Dashboard, Plugins or Docker tabs. Sometimes it takes up to 45 seconds to render the page. On the Dashboard tab there are always at least 3 CPUs pegged at 100%, and the page finishes rendering when the CPUs go back to normal load. I also notice this process pop to the top of htop while the page is rendering:
     /usr/local/bin/unraid-api/unraid-api /snapshot/api/dist/unraid-api.cjs start
     I will be rolling back to 6.11.5. Diags attached. Thanks in advance. mediatower-diagnostics-20231203-1452.zip
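     If anyone wants to reproduce the observation without sitting in htop, here's a sketch using stock tools (the pgrep pattern is just my guess at matching the process above):

        # Snapshot the top CPU consumers while the Dashboard renders
        top -b -n 1 -o %CPU | head -n 15
        # Or follow the unraid-api process on its own
        top -b -d 2 -p "$(pgrep -f unraid-api.cjs | head -n 1)"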
  7. I have two Unraid servers both running the Connect plugin and both at the same unraid release version (6.11.5). One server (Tower) does not show my unraid.net account photo: My other server (MediaTower) shows my unraid.net photo correctly: Except for the certs, the Connect settings are the same for both servers. I've tried re-installing Connect and running "unraid-api restart" on the Tower server. No luck. Obviously this is not a critical problem but I'm wondering how I can get my account photo to show up on my Tower server. Thanks in advance.
  8. @binhex Hi, thanks for all your efforts maintaining this PyCharm docker. I have a question about it. I normally do all my python development in a virtual environment like conda. I've downloaded the anaconda installer and tried to run it inside the PyCharm container. It fails because of missing libraries. Is there any way to add the anaconda venv to this docker? Thanks!
  9. After upgrading to 6.11.5 I noticed a CPU usage spike that wasn't there in 6.10. Every 30s or so a CPU core would spike to close to 100% for a second and then go back to normal. top indicated that a kworker thread was the culprit. After a lot of googling I found a command to turn off kworker thread polling:
     echo N > /sys/module/drm_kms_helper/parameters/poll
     This stopped the 30s CPU spiking. My question for the Linux gurus on this forum is this: will turning off kworker polling like this affect the operation of Unraid in any way? Thanks in advance.
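     To make the same workaround survive a reboot, a minimal sketch assuming the stock /boot/config/go startup script on the flash drive (keep whatever your go file already contains; the emhttp line is the stock one):

        #!/bin/bash
        # /boot/config/go runs once at each boot on Unraid
        # Disable drm_kms_helper output polling to stop the periodic kworker spikes
        echo N > /sys/module/drm_kms_helper/parameters/poll
        # Start the Management Utility (stock line)
        /usr/local/sbin/emhttp &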
  10. Have you looked at Spaceinvader One's video on how to fix a corrupt Plex DB?
  11. It does indeed. Thanks. The whole process took about 10 minutes backing up to Synology. 47GB tar file!
  12. Thanks! CA Backup installed. One other question: do I need to disable Docker before backing up the appdata directory? I know Plex does periodic updates to its database.
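     For a manual backup outside CA Backup, the stop-archive-restart approach looks roughly like this (the backup path is just an example):

        # Remember which containers were running, stop them, archive appdata, restart
        RUNNING=$(docker ps -q)
        docker stop $RUNNING
        tar -czf /mnt/user/backups/appdata-$(date +%Y%m%d).tar.gz -C /mnt/user appdata
        docker start $RUNNING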
  13. @JorgeB I'm on 6.10.3 and would like to upgrade to 6.11.3 for the improvement in SMB performance. However, the one docker that's using most of the array is Plex. If I lose this docker after the upgrade it would be a major PITA for me: it would have to rescan all my media libraries, and that would take days. Question: how does one back up docker settings? Thanks in advance.
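     If it helps anyone later: Unraid keeps each container's settings as a template on the flash drive, so copying that directory should preserve them (the destination path is just an example):

        # Docker templates hold the per-container settings from the GUI
        cp -r /boot/config/plugins/dockerMan/templates-user /mnt/user/backups/docker-templates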
  14. Please pardon my lack of knowledge, but I access Unraid shares from my Win10 VM using the file share path directly (\\<unraid_hostname>\<share>) instead of mounting as a Windows drive. Would there be any benefit to me upgrading to 6.11.1 and using the new virtiofs driver? Thanks in advance.
  15. Workaround did the trick. Thanks @binhex
  16. Well, this is one for the books. Plex just released a new container build. Installed it. Now the container size reports 660MB. I suspect the previous version was foobar. Thanks for your help @trurl
  17. Transcoder is set to /transcode, which is mapped to /tmp on Unraid. From my Plex client:
  18. Plex transcoding is set to /tmp. Not using DVR feature.
  19. Container Size gives different values than "docker system df -v". Looks like Plex is the culprit. I'll have to go digging around in that container. Thanks @trurl
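     For anyone following along, this is the kind of digging I mean (the container name is taken from my docker system df output; du being available inside the container is an assumption):

        # Writable-layer size per container (the SIZE column)
        docker ps -as
        # Largest directories inside the Plex container
        docker exec Plex-Media-Server du -xh / 2>/dev/null | sort -rh | head -n 20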
  20. Someone on the Reddit Unraid group was having the same problem. Looks like the docker warning subsystem cannot distinguish between disk usage inside the docker.img file and disk usage on the array. Oh well. Warning - Docker image disk utilization high
  21. Received the warning yesterday at 12:36PM if that helps narrow down the search.
  22. Diagnostics attached. mediatower-diagnostics-20220801-1349.zip
  23. Just noticed this warning last night. My image file is 20GB but according to "docker system df -v" I'm only using 2.6GB. The plex docker is using the array on which one disk just reached the 70% threshold. Is this related to the docker warning? Thanks in advance.

      root@MediaTower:~# ls -l /mnt/user/system/docker/docker.img
      -rw-rw-rw- 1 nobody users 21474836480 Aug 1 13:11 /mnt/user/system/docker/docker.img
      root@MediaTower:~# docker system df -v
      Images space usage:
      REPOSITORY                   TAG     IMAGE ID      CREATED       SIZE     SHARED SIZE  UNIQUE SIZE  CONTAINERS
      linuxserver/duckdns          latest  027cca6024b6  5 days ago    22.31MB  0B           22.31MB      1
      linuxserver/nextcloud        latest  27992a0e7f58  8 days ago    420.9MB  0B           420.9MB      1
      linuxserver/mariadb          latest  4ef1197eee5c  9 days ago    288.8MB  0B           288.8MB      1
      plexinc/pms-docker           latest  bd30c8482314  5 weeks ago   656.4MB  0B           656.4MB      1
      jlesage/nginx-proxy-manager  latest  8b2f4cf2c43f  2 months ago  191.8MB  0B           191.8MB      1
      spaceinvaderone/macinabox    latest  141be9f2bd41  6 months ago  1.062GB  0B           1.062GB      1

      Containers space usage:
      CONTAINER ID  IMAGE                        COMMAND                  LOCAL VOLUMES  SIZE    CREATED       STATUS                  NAMES
      7f899bddc97a  linuxserver/duckdns          "/init"                  0              18.1kB  4 days ago    Exited (0) 3 days ago   duckdns
      250cb2242d30  linuxserver/nextcloud        "/init"                  0              50.8kB  5 days ago    Exited (0) 3 days ago   nextcloud
      e512e1a314f7  linuxserver/mariadb          "/init"                  0              22.7kB  5 days ago    Exited (0) 3 days ago   mariadb
      b5aa71488419  plexinc/pms-docker           "/init"                  0              11.4GB  9 days ago    Up 4 hours (healthy)    Plex-Media-Server
      14497fb6d571  jlesage/nginx-proxy-manager  "/init"                  0              16.3kB  8 weeks ago   Exited (0) 3 days ago   NginxProxyManager
      bea321562dfe  spaceinvaderone/macinabox    "/bin/sh -c 'bash /M…"   1              0B      6 months ago  Created                 macinabox

      Local Volumes space usage:
      VOLUME NAME                                                       LINKS  SIZE
      21e2e215cd3e2a40baa14ec50f2d0b82e8caf434e213ef6856c41a916b62cd08  0      0B
      6badc8e4b59899efac563fd45f4a3a27555d69eea5cfb3794d6908de8da41ae7  0      0B
      95dba7fdb028efc2858cc810e87b8c2cdbd1bda91e92dd1f04325fba49f8dced  1      0B
      cbf4d165d4969f4db849d2855c4bd049f4d21f88157ea9a3b34c459d9ecff99a  0      0B

      Build cache usage: 0B
      CACHE ID  CACHE TYPE  SIZE  CREATED  LAST USED  USAGE  SHARED
      root@MediaTower:~#
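      A quick way to sanity-check how full the docker.img loopback actually is, assuming the stock setup where it is mounted at /var/lib/docker:

        # Utilization of the docker.img filesystem itself
        df -h /var/lib/docker
        # Compare against the array disks (one of mine just hit 70%)
        df -h /mnt/disk*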
  24. After upgrading to 6.10.3 I'm getting this popup error (see attached pic). The IntelliJ window starts fine, but if I close it I cannot open it again; I have to restart the docker. Is there a workaround for this or do I have to wait for the next docker update? FYI this happens with the PyCharm docker also. Thanks.