koopatroopa8

Members
  • Posts: 6

  • Joined

  • Last visited

koopatroopa8's Achievements

Noob (1/14)

0 Reputation

  1. I do have the orphaned container issue again. If you'd rather take this to DMs for more back and forth, we can. The main thing I can see being out of the ordinary for normal usage is that my uptime is 4 days and I'm running 5 python scripts continuously in the container. I can share whatever you think you would need to diagnose/reproduce it.
  2. I can restart the container. I had seen something happen before that may have been the same issue, but this is the first time I have actually diagnosed it as an orphaned VNC session. I'll let you know if it happens again.
  3. I seem to be running into an issue where, rather than connecting to my previous VNC session, noVNC keeps creating a new VNC session when I connect, and I'm unable to reach my old one. I know for sure my old VNC session is still running, because in it is a python script that I can tell is running based on both top and disk usage (it's scraping APIs and saving the results). Is there any way to "recover" that first VNC session? Here is terminal output showing the multiple VNC processes (see also the process-listing sketch after this list):

     [root@761abf1adbf4 /]# ps -ef | grep vnc
     nobody      85    81  0 Jun30 ?     00:00:05 /usr/sbin/python /usr/sbin/websockify --web /usr/share/webapps/novnc/ 6080 localhost:5900
     nobody      89    84  0 Jun30 ?     00:02:55 Xvnc :0 -depth 24 -PasswordFile=/home/nobody/.vnc/passwd -Desktop=PythonUnraid
     nobody    1576    85  0 07:00 ?     00:00:00 /usr/sbin/python /usr/sbin/websockify --web /usr/share/webapps/novnc/ 6080 localhost:5900
     root      1587  1507  0 07:01 pts/1 00:00:00 grep vnc

     And here is output from top showing that my python scripts are still running.
  4. So I moved everything to the array, where it looked like significantly less data; then I moved it back, and now it is only taking up 455 GB. I don't understand why that would fix it, but apparently that's all that was needed.
  5. No vdisks that I'm aware of; I haven't ever made any VMs on this system, only 7 fairly standard docker containers. The only "non-traditional" thing I think I have done is load ~400 GB of data into my mariadb database.
  6. BLUF: My pool (named "cache") is reporting a usage of 702 GB (653 GiB), but the total size of all files in /mnt/cache is only 472 GB (440 GiB). I want to know why 200+ GB are reported as used when I don't have that many files.

     I have a problem that I have found some previous references to, but it has not been resolved by anything I have tried so far. btrfs reports 653 GiB used on /mnt/cache, which matches the Unraid GUI's 702 GB once converted from GiB to GB (a quick conversion check follows after this list). However, computing the size of /mnt/cache in either Krusader or qdirstat gives only 440 GiB. I can compare this to the sizes of my appdata and system shares (the only folders in /mnt/cache), which the GUI reports as 451 GB and 22.5 GB respectively. 440 GiB = 472 GB, so those numbers correspond, meaning that as far as I can tell there is no other location adding to /mnt/cache.

     What I want to know is why I have over 200 GiB of "used" space on my cache drive that I cannot find or access. I have tried balancing and scrubbing the btrfs filesystem, and neither improved the usage figures. Worst case, I may have to move all of appdata to the array, rebuild the pool, and move everything back, but I'd like to know the cause so I can either prevent it from happening again or fix it outright without moving everything.

     I have attached diagnostics and a few screenshots to show what I am seeing. Any help or additional areas/things to check would be appreciated.

     Attachments: btrfs usage.txt, koopatower-diagnostics-20240111-2236.zip
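
Regarding post 3 above, here is a minimal inspection sketch in Python for telling the long-running Xvnc and websockify processes apart from a newly spawned one, by listing each VNC-related process with its PID, parent PID, and full start time. It assumes a procps-style ps is available inside the container (the output quoted above suggests it is), and it is only a diagnostic aid; it does not by itself recover the original session.

     import subprocess

     # Ask ps for PID, parent PID, full start time, and the command line,
     # then keep only the VNC-related entries (Xvnc and the websockify proxy).
     out = subprocess.run(
         ["ps", "-eo", "pid,ppid,lstart,cmd"],
         capture_output=True, text=True, check=True,
     ).stdout

     for line in out.splitlines():
         if "Xvnc" in line or "websockify" in line:
             print(line)

Against the output quoted in post 3, this would show PIDs 85 and 89 with a June 30 start time and an extra websockify process (PID 1576) started at 07:00 that day, which is what makes the duplicate stand out.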
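
And regarding post 6, a quick worked check of the GiB/GB conversion used there (1 GiB = 1024^3 bytes, 1 GB = 10^9 bytes), just to confirm the two pairs of figures describe the same quantities:

     GIB = 1024 ** 3   # bytes per GiB
     GB = 10 ** 9      # bytes per GB

     for label, gib in [("btrfs-reported usage", 653), ("files under /mnt/cache", 440)]:
         print(f"{label}: {gib} GiB = {gib * GIB / GB:.0f} GB")

     # btrfs-reported usage: 653 GiB = 701 GB   (the GUI's ~702 GB, allowing for rounding)
     # files under /mnt/cache: 440 GiB = 472 GB (matches Krusader/qdirstat)

The gap between the two, roughly 230 GB, is the unaccounted-for space the post asks about.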