Spatial Disorder

Members
  • Content Count: 21
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About Spatial Disorder
  • Rank: Member

  1. Same issue. Applied @bluemonster's fix and everything looks good now. Good work @bluemonster!
  2. I can appreciate that...especially when trying to decide which OS to go with. I will say, I jumped to Unraid about 2.5 years ago and it has been fantastic, and one of the easiest systems to administer. The few times I've had issues, this community has been extremely quick to help resolve them. Hell, I've gotten better support here than I have for enterprise products I pay six figures for (I'm an IT Manager). I think this particular issue seems extremely hard to replicate. Plex corrupted on me, but I'm also running lidarr, sonarr, radarr, bazarr, tautulli, syncthing, and pihole, which all seem to be using sqlite to some extent...and I've had no issues with those. Now that I took a closer look, pihole and syncthing are also using /mnt/user instead of /mnt/cache, and they're fine so far (moving them now though). So why did Plex corrupt and not the others?
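     A minimal sketch of the move being described, assuming a Plex-style container whose config lives in appdata (the image name and paths are illustrative; on Unraid you would normally change the host path in the container's template rather than run docker by hand):

          # Before: /config mapped through the user-share (FUSE) layer
          #   -v /mnt/user/appdata/plex:/config
          # After: /config mapped directly to the cache disk
          docker run -d \
            --name=plex \
            -v /mnt/cache/appdata/plex:/config \
            -v /mnt/user/media:/media \
            plexinc/pms-docker

     Only the config/database path arguably needs to bypass /mnt/user, since the corruption reports in this thread center on SQLite; bulk media mappings can stay on the user share.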
  3. Just came across this post and wanted to add another to the list. Plex had been running solid since my last server rebuild nearly 2.5 years ago. I upgraded to 6.7 a few days after release, and a week or two later Plex became corrupt. If I recall, it was shortly after an update to the Plex container...so I never suspected it as a potential Unraid issue. Since Plex had been running for so long, and I was kind of wanting to change some of my libraries around, I decided to just do a rebuild instead of restoring from backups. I also run Sonarr/Radarr/Lidarr with no issues...but they are all using /mnt/cache. I would have sworn the same was true for Plex, but I just pulled down a backup from prior to my rebuild and, sure enough, I used /mnt/user. It was the first container I set up and I probably didn't know any better at the time. I believe someone also mentioned something about CA Backup / Restore Appdata...I also run weekly backups of all containers...but I don't recall if the corruption happened after a backup. Even if it did, this seems more like a possible correlation than the underlying cause. I know this is all anecdotal, but @CHBMB may be on to something with not only switching to /mnt/cache or disk, but also creating a clean install with a clean database. So far, I've had no issues with Plex since my clean install almost three weeks ago.
  4. This fix seems to have solved the issue. When the VM is idle, CPU usage is back to just a few percent, like it was before the 1803 update.
  5. @DZMM I think it was this one: https://forum.proxmox.com/threads/high-cpu-load-for-windows-10-guests-when-idle.44531/ I'm testing right now to see if this resolves it for me...
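     For reference, a sketch of what testing that thread's suggestion can look like on Unraid, where VMs are defined in libvirt XML (the VM name is a placeholder, and treating Hyper-V enlightenments as the thread's fix is an assumption, not something confirmed in this post):

          # Open the domain XML for the Windows 10 guest (name is hypothetical)
          virsh edit "Windows 10"
          # ...then, inside <features>, extend the <hyperv> block, e.g.:
          #   <hyperv>
          #     <relaxed state='on'/>
          #     <vapic state='on'/>
          #     <spinlocks state='on' retries='8191'/>
          #     <synic state='on'/>
          #     <stimer state='on'/>
          #   </hyperv>
          # (stimer depends on synic; both are standard libvirt elements)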
  6. Still haven't found a solution. I've done some more playing around with SeaBIOS vs. OVMF, updating to the latest virtio drivers, and some Win10 adjustments, and nothing seems to make a difference.
  7. I did some experimenting last night after upgrading unRAID to 6.5.2 and trying a clean install of Windows 10. I still see about 15%-20% CPU utilization...even though Task Manager within the VM shows nearly idle. What I did: I downloaded a clean Windows 10 ISO, which included the 1803 update, and basically used the default VM template settings, with the exception of using SeaBIOS and virtio 0.1.126. Seeing the same results... I checked and saw virtio stable is now up to 0.1.146, so I blew away the entire VM and reinstalled with virtio 0.1.146 (I have no idea if this could even cause the issue...) and am still seeing the same 15%-20% CPU at idle. Doing some google-fu I found a couple folks posting similar issues with KVM on Ubuntu Server...no resolution that I could find, just wanted to share.
  8. I'm also seeing this issue after the Windows 10 April (1803) update. Current VM has been solid for probably close to a year, and I noticed the issue immediately after 1803. Task Manager within VM shows essentially nothing going on...yet unRAID CPU shows around 20%.
  9. I had the same issues as @rbroberts when updating the container a few months back. Everything would be working fine, then break after updating the container. After the first time, I blew everything away and did a clean setup, which worked great until another update, when it happened again, so I bailed on using it. I was mostly just screwing around with it and wasn't really interested in troubleshooting it. I didn't keep any logs, so this is probably useless, other than stating I've also seen this same issue.
  10. Thank you johnnie.black! I would have never figured this one out on my own...and I've learned more about btrfs than I ever wanted to. I had misunderstood what you meant by "start small." So, even though it failed at 80, it did balance a significant amount. I went back and was able to quickly increment up to about 70...then worked my way up to a 100% balance with no errors. Now I'm showing:

          root@Server:~# btrfs fi show /mnt/cache
          Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
                  Total devices 1 FS bytes used 121.61GiB
                  devid    1 size 232.89GiB used 125.02GiB path /dev/sdc1

      Thanks again for all the help!
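      The incremental approach that worked here can be expressed as a short loop; a sketch assuming a single-device pool mounted at /mnt/cache, as in this thread:

          # Raise the usage filter in steps so each pass frees chunks
          # for the next; stop at the first failure.
          for pct in 5 10 25 40 55 70 85 100; do
              btrfs balance start -dusage=$pct /mnt/cache || break
          done
          # Verify that allocated chunk space (the devid "used" figure) shrank
          btrfs fi show /mnt/cache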
  11. Well....

          root@Server:/mnt/user/james# btrfs balance start -dusage=80 /mnt/cache
          ERROR: error during balancing '/mnt/cache': No space left on device
          There may be more info in syslog - try dmesg | tail
          root@Server:/mnt/user/james# dmesg | tail
          [27506.319628] BTRFS info (device sdc1): relocating block group 58003030016 flags 1
          [27511.777268] BTRFS info (device sdc1): found 25126 extents
          [27606.230821] BTRFS info (device sdc1): found 25126 extents
          [27606.418496] BTRFS info (device sdc1): relocating block group 56929288192 flags 1
          [27627.136389] BTRFS info (device sdc1): found 30137 extents
          [27682.014305] BTRFS info (device sdc1): found 30137 extents
          [27682.216675] BTRFS info (device sdc1): relocating block group 55855546368 flags 1
          [27707.130530] BTRFS info (device sdc1): found 30129 extents
          [27773.906438] BTRFS info (device sdc1): found 30127 extents
          [27774.372412] BTRFS info (device sdc1): 3 enospc errors during balance

      Not sure what to do next...do I need to clear more space? That would mean moving the docker data in appdata, or the domains (Win10 / Xubuntu) vdisks, off the cache.
  12. I'm confused.... Before I did anything else:

          root@Server:~# btrfs fi show /mnt/cache
          Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
                  Total devices 1 FS bytes used 131.82GiB
                  devid    1 size 232.89GiB used 232.89GiB path /dev/sdc1

      After looking at /mnt/cache, I realized I'd forgotten I had downloads sitting on the cache drive...I deleted those (~11GB). I then ran the command below, as suggested in the linked post:

          root@Server:/mnt/cache/system# btrfs balance start -dusage=5 /mnt/cache
          Done, had to relocate 1 out of 236 chunks

      I then get:

          root@Server:/mnt/cache/system# btrfs fi show /mnt/cache
          Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
                  Total devices 1 FS bytes used 120.47GiB
                  devid    1 size 232.89GiB used 232.88GiB path /dev/sdc1

      I only have 4 shares on /mnt/cache:

          root@Server:/mnt/cache# du -sh /mnt/cache/appdata/
          38G     /mnt/cache/appdata/
          root@Server:/mnt/cache# du -sh /mnt/cache/domains/
          45G     /mnt/cache/domains/
          root@Server:/mnt/cache# du -sh /mnt/cache/downloads/
          205M    /mnt/cache/downloads/
          root@Server:/mnt/cache# du -sh /mnt/cache/system/
          26G     /mnt/cache/system/

      Which should add up to ~110GB used...
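      The mismatch between those numbers comes from btrfs allocating space in chunks: the devid line's "used 232.88GiB" is allocated chunk space, while "FS bytes used 120.47GiB" is the actual data. Two stock btrfs-progs commands show the breakdown (nothing assumed here beyond the /mnt/cache mount point from this thread):

          # Overall allocated vs. unallocated space, per device
          btrfs filesystem usage /mnt/cache
          # Allocation by type (Data / Metadata / System)
          btrfs filesystem df /mnt/cache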
  13. Started getting a cache-drive-full error, with dockers/VMs stopping/pausing...however, the cache disk shows plenty of free space. The server has been extremely stable in the current configuration since about February, though I did add the musicbrainz/headphones dockers maybe 4-6 weeks ago. I did a reboot this morning (sorry, I'm from the Windows Server world...when things get squirrely it's time for a reboot) and it changed nothing. I also expanded the docker vdisk from 20GB to 25GB, which didn't help either. The cache shouldn't ever get full before the mover runs...I don't download/move much data around on average. Diagnostics are attached. server-diagnostics-20171118-1010.zip