Spatial Disorder

Everything posted by Spatial Disorder

  1. Also seeing this error...rolled back to `3.7.0-1-01` and all is well...
  2. Same here...pulled down the new test update and acquired port right away on the first attempt. Everything is looking good so far.
  3. My external IP is showing accurately in the bottom right. I didn't pay attention when I first connected...so I'm not sure if it refreshes at some interval or not?
  4. Set up next-gen and connected to Toronto without issue, though it did take about 40 minutes before finally getting a port. Speeds seem to be as good or possibly better, but I haven't tested much to confirm. As always, thanks @binhex for the top-notch support and keeping this updated!
  5. Wow...0.5PB...that's pretty impressive. Any concerns with monthly bandwidth utilization? No issues from your ISP? I've also been using duplicati for the last few years, been very happy with it overall. Do you do anything different with your offsite mount to ensure you could recover in the event of...say an accidental deletion?
  6. Thanks, that makes sense. The more I think about it the more I'm leaning toward going all in on this in order to simplify everything. Right now I have a mix of data local and in gdrive. I'm with you on being lazy...I work in IT and the older I get the less I want to mess with certain aspects of it...just want reliability. I really only have one share that would even be a concern...and now that I think about it...it should probably live in an encrypted vault...
  7. First @DZMM, thanks for sharing this, and thanks to everyone who has contributed to making it better. I've been using it for a few months now with mostly no issues (had that odd orphaned image issue a while back that a few of us had with mergerfs). That was really the only hiccup. I'm really considering moving everything offsite... As for hardlinks, this is timely, as I've finally decided to get around to making hardlinks and getting seeding to work properly. When I originally set things up I had:
     /mnt/user/media <- with subdirectories for tv/movies/music/audiobooks
     /mnt/user/downloads <- with subdirectories for complete/incomplete
     Your script came along and I then added:
     /mnt/user/gdrive_local
     /mnt/user/gdrive_mergerfs
     /mnt/user/gdrive_rclone
     I know just mapping all containers to /mnt/user would solve this...but I'm a little apprehensive about giving all the necessary applications read/write to the entire array. I don't have any of this data going to cache...so is there anything stopping me (or a good reason not to) from just stuffing everything into /mnt/user/media and then mapping everything to that (roughly what's sketched below)?
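     A rough sketch of that consolidation, assuming hypothetical directory names and using sonarr purely as an example container; the idea is that downloads and the library live under one /mnt/user/media share mapped as a single /media path, so hardlinks stay on the same filesystem:
       # Illustrative layout only - one consolidated share instead of separate media/downloads shares
       mkdir -p /mnt/user/media/{tv,movies,music,audiobooks}
       mkdir -p /mnt/user/media/downloads/{complete,incomplete}
       # Map just the consolidated share into each container rather than all of /mnt/user, e.g.:
       docker run -d --name=sonarr \
         -v /mnt/user/media:/media \
         linuxserver/sonarr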
  8. @DZMM and @teh0wner Thanks for the help, I'm back up and running and have moved to the latest working scripts with no issues the last 24 hours. I'm fairly certain the new scripts were failing due to the latest mergerfs not getting pulled. Just for good measure I:
     Deleted all shares (local, rclone mount, mergerfs mount)
     Ran docker rmi [imageID] to get rid of the old mergerfs image (sketched below)
     After some reading up on rclone vs rclone-beta, reverted back to rclone (I don't think this was the issue, but for this purpose I would rather be on a stable release and I see nothing that needs the new stuff in the beta release). I'm sure it's fine either way.
     Pulled latest github scripts and modified variables for my setup
     Clean reboot for good measure
     Thanks for all the work putting this together 🤘
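     A minimal sketch of that image cleanup, assuming trapexit/mergerfs-static-build is the image being removed; the actual ID comes from docker images:
       # Find the old mergerfs build image and remove it so the mount script pulls a fresh copy
       docker images | grep mergerfs-static-build
       docker rmi <imageID>    # substitute the ID reported above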
  9. @DZMM I blew away all shares and appdata, rebooted, then manually ran:
     mkdir -p /mnt/user/appdata/other/rclone/mergerfs
     docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
     mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
     I then ran the latest gdrive_rclone_mount script and it runs successfully (shown below). As soon as I even try to list the contents of mount_mergerfs the terminal freezes, and it pegs a core on the CPU and never responds. I'm not sure what to try (a few generic checks are sketched after this post)...
     Script location: /tmp/user.scripts/tmpScripts/gdrive_mount/script
     Note that closing this window will abort the execution of this script
     01.03.2020 12:50:26 INFO: *** Starting mount of remote gdrive_media_vfs
     01.03.2020 12:50:26 INFO: Checking if this script is already running.
     01.03.2020 12:50:26 INFO: Script not running - proceeding.
     01.03.2020 12:50:26 INFO: Mount not running. Will now mount gdrive_media_vfs remote.
     01.03.2020 12:50:26 INFO: Recreating mountcheck file for gdrive_media_vfs remote.
     2020/03/01 12:50:26 DEBUG : rclone: Version "v1.51.0-076-g38a4d50e-beta" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "copy" "mountcheck" "gdrive_media_vfs:" "-vv" "--no-traverse"]
     2020/03/01 12:50:26 DEBUG : Using config file from "/boot/config/plugins/rclone-beta/.rclone.conf"
     2020/03/01 12:50:27 DEBUG : mountcheck: Modification times differ by -12h40m10.128729289s: 2020-03-01 12:50:26.567729289 -0500 EST, 2020-03-01 05:10:16.439 +0000 UTC
     2020/03/01 12:50:29 INFO : mountcheck: Copied (replaced existing)
     2020/03/01 12:50:29 INFO : Transferred: 32 / 32 Bytes, 100%, 20 Bytes/s, ETA 0s
     Transferred: 1 / 1, 100%
     Elapsed time: 1.5s
     2020/03/01 12:50:29 DEBUG : 7 go routines active
     01.03.2020 12:50:29 INFO: *** Creating mount for remote gdrive_media_vfs
     01.03.2020 12:50:29 INFO: sleeping for 5 seconds
     01.03.2020 12:50:34 INFO: continuing...
     01.03.2020 12:50:34 INFO: Successful mount of gdrive_media_vfs mount.
     01.03.2020 12:50:34 INFO: Mergerfs already installed, proceeding to create mergerfs mount
     01.03.2020 12:50:34 INFO: Creating gdrive_media_vfs mergerfs mount.
     01.03.2020 12:50:34 INFO: Checking if gdrive_media_vfs mergerfs mount created.
     01.03.2020 12:50:34 INFO: Check successful, gdrive_media_vfs mergerfs mount created.
     01.03.2020 12:50:34 INFO: Starting dockers.
     "docker start" requires at least 1 argument.
     See 'docker start --help'.
     Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]
     Start one or more stopped containers
     01.03.2020 12:50:34 INFO: Script complete
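     A few generic checks for a wedged FUSE/mergerfs mount, as a sketch only; /mnt/user/mount_mergerfs is an assumed path, adjust to the real mount point:
       # Is a mergerfs process spinning on a core?
       ps aux | grep [m]ergerfs
       # Is the FUSE mount actually present?
       mount | grep mergerfs
       # If it is hung, lazy-unmount before rerunning the mount script
       fusermount -uz /mnt/user/mount_mergerfs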
  10. I'm also seeing an issue with mergerfs hanging @teh0wner. I was doing some maintenance last night and decided to update to the latest version of this script and could not access the mergerfs mount. It would peg a single core on the CPU and just hang. I reverted back to the older script, which had previously been working for a month+, and I'm seeing the same issue. I blew away all directories (local, mount_rclone, mount_mergerfs, appdata/other) and did a clean reboot with the same issue. Looking at the last few posts, I'm wondering if it isn't something that changed with trapexit/mergerfs-static-build
  11. Same issue. Applied @bluemonster's fix and everything looks good now. Good work @bluemonster!
  12. I can appreciate that...especially when trying to decide which OS to go with. I will say, I jumped to Unraid about 2.5 years ago and it has been fantastic and one of the easiest to administer. The few times I've had issues, this community has been extremely quick to help resolve them. Hell, I've gotten better support here than I have for enterprise products I pay six figures for (I'm an IT Manager). I think with this particular issue it seems extremely hard to replicate. Plex corrupted on me, but I'm also running lidarr, sonarr, radarr, bazarr, tautulli, syncthing, and pihole, which all seem to be using sqlite to some extent...and I've had no issues with those. Now that I took a closer look, pihole and syncthing are also using /mnt/user, instead of /mnt/cache, and they're fine so far (moving them now though). So why did Plex corrupt and not the others?
  13. Just came across this post and wanted to add another to the list. Plex had been running solid since my last server rebuild nearly 2.5 years ago. I upgraded to 6.7 a few days after release, and a week or two later Plex became corrupt. If I recall, it was shortly after an update to the Plex container...so I never suspected it as a potential Unraid issue. Since Plex had been running for so long, and I was kind of wanting to change some of my libraries around, I decided to just do a rebuild instead of restoring from backups. I also run Sonarr/Radarr/Lidarr with no issues...but they are all using /mnt/cache. I would have sworn the same was true for Plex, but I just pulled down a backup from prior to my rebuild and sure enough, I used /mnt/user. It was the first container I set up and I probably didn't know any better at the time. I believe someone also mentioned something about CA Backup / Restore Appdata...I also run weekly backups of all containers...but do not recall if corruption happened after a backup. Even if it did, this seems more like a possible correlation than the underlying cause. I know this is all anecdotal, but @CHBMB may be on to something with not only switching to /mnt/cache or disk, but also creating a clean install with a clean database. So far, I've had no issues with Plex since my clean install almost three weeks ago.
  14. This fix seems to have solved the issue. When VM is idle, CPU usage is back to just a few percent like it was before 1803 update.
  15. @DZMM I think it was this one: https://forum.proxmox.com/threads/high-cpu-load-for-windows-10-guests-when-idle.44531/ I'm testing right now to see if this resolves it for me...
  16. Still haven't found a solution. I've done some more playing around with SeaBIOS vs OVMF, updating to the latest virtio, and some Win10 adjustments, and nothing seems to make a difference.
  17. I did some experimenting last night after upgrading unRAID to 6.5.2 and trying a clean install of Windows 10. Still seeing about 15%-20% CPU utilization...even though Task Manager within the VM shows nearly idle. What I did: I downloaded a clean Windows 10 ISO, which included the 1803 update. Basically used the default VM template settings with the exception of using SeaBIOS and virtio 0.1.126. Seeing the same results... I checked and saw virtio stable is now up to 0.1.146, so I blew away the entire VM, reinstalled with 0.1.146 virtio (I have no idea if this could even cause the issue...), and am still seeing the same 15%-20% CPU at idle. Doing some google-fu I found a couple of folks posting similar issues with KVM on Ubuntu Server...no resolution that I could find, just wanted to share.
  18. I'm also seeing this issue after the Windows 10 April (1803) update. Current VM has been solid for probably close to a year, and I noticed the issue immediately after 1803. Task Manager within VM shows essentially nothing going on...yet unRAID CPU shows around 20%.
  19. I had the same issues as @rbroberts when updating the container a few months back. Everything would be working fine, then break after updating the container. After the first time, I blew everything away and did a clean setup; it worked great until another update, when it happened again, so I bailed on using it. I was mostly just screwing around with it and wasn't really interested in troubleshooting it. I didn't keep any logs, so this is probably useless, other than stating I've also seen this same issue.
  20. Thank you johnnie.black! I would have never figured this one out on my own...and I've learned more about btrfs than I ever wanted to. I had misunderstood what you meant by start small. So, even though it failed at 80, it did balance a significant amount. I went back and was able to quickly increment up to about 70...then worked my way up to a 100% balance with no errors (roughly the approach sketched below). Now I'm showing:
      root@Server:~# btrfs fi show /mnt/cache
      Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
      Total devices 1 FS bytes used 121.61GiB
      devid 1 size 232.89GiB used 125.02GiB path /dev/sdc1
      Thanks again for all the help!
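      A minimal sketch of that incremental balance, assuming the pool is mounted at /mnt/cache; each pass compacts chunks below the given usage threshold and frees space for the next:
        # Step -dusage up gradually instead of jumping straight to a high value
        for pct in 5 10 20 30 40 50 60 70 80 90; do
          btrfs balance start -dusage=$pct /mnt/cache
        done
        # Check allocation afterwards
        btrfs fi show /mnt/cache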
  21. Well....
      root@Server:/mnt/user/james# btrfs balance start -dusage=80 /mnt/cache
      ERROR: error during balancing '/mnt/cache': No space left on device
      There may be more info in syslog - try dmesg | tail
      root@Server:/mnt/user/james# dmesg | tail
      [27506.319628] BTRFS info (device sdc1): relocating block group 58003030016 flags 1
      [27511.777268] BTRFS info (device sdc1): found 25126 extents
      [27606.230821] BTRFS info (device sdc1): found 25126 extents
      [27606.418496] BTRFS info (device sdc1): relocating block group 56929288192 flags 1
      [27627.136389] BTRFS info (device sdc1): found 30137 extents
      [27682.014305] BTRFS info (device sdc1): found 30137 extents
      [27682.216675] BTRFS info (device sdc1): relocating block group 55855546368 flags 1
      [27707.130530] BTRFS info (device sdc1): found 30129 extents
      [27773.906438] BTRFS info (device sdc1): found 30127 extents
      [27774.372412] BTRFS info (device sdc1): 3 enospc errors during balance
      Not sure what to do next...do I need to clear more space? That would mean moving off docker data in appdata or the domains (Win10 / Xubuntu) vdisks.
  22. I'm confused.... Before I did anything else:
      root@Server:~# btrfs fi show /mnt/cache
      Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
      Total devices 1 FS bytes used 131.82GiB
      devid 1 size 232.89GiB used 232.89GiB path /dev/sdc1
      After looking at /mnt/cache I forgot I have downloads sitting on the cache drive...I deleted that (~11GB). I then ran the below command as suggested in the linked post:
      root@Server:/mnt/cache/system# btrfs balance start -dusage=5 /mnt/cache
      Done, had to relocate 1 out of 236 chunks
      I then get:
      root@Server:/mnt/cache/system# btrfs fi show /mnt/cache
      Label: none  uuid: 8df4175c-ffe2-44d7-91e2-fbb331319bed
      Total devices 1 FS bytes used 120.47GiB
      devid 1 size 232.89GiB used 232.88GiB path /dev/sdc1
      I only have 4 shares on /mnt/cache:
      root@Server:/mnt/cache# du -sh /mnt/cache/appdata/
      38G     /mnt/cache/appdata/
      root@Server:/mnt/cache# du -sh /mnt/cache/domains/
      45G     /mnt/cache/domains/
      root@Server:/mnt/cache# du -sh /mnt/cache/downloads/
      205M    /mnt/cache/downloads/
      root@Server:/mnt/cache# du -sh /mnt/cache/system/
      26G     /mnt/cache/system/
      Which should add up to ~110GB used... (the allocated-vs-used distinction is sketched below)
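      For the allocated-vs-used discrepancy above, standard btrfs tooling can break it down (nothing specific to this setup); the "used" figure from btrfs fi show is allocated chunk space, not file data:
        # Shows allocated vs actually-used space, plus what remains unallocated for new chunks
        btrfs filesystem usage /mnt/cache
        # Per-chunk-type summary (data / metadata / system)
        btrfs filesystem df /mnt/cache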
  23. Started getting a cache drive full error and dockers/VMs stopping/pausing...however the cache disk shows plenty of free space. The server has been extremely stable in the current configuration since about Feb, though I did add the musicbrainz/headphones dockers maybe 4-6 weeks ago. I did a reboot this morning (sorry, I'm from the Windows Server world...when things get squirrely it's time for a reboot) and this changed nothing. I also expanded the docker vdisk from 20GB to 25GB, which also didn't help. Cache shouldn't ever get full before mover runs...I don't download/move much data around on average. Diagnostics are attached. server-diagnostics-20171118-1010.zip