Spatial Disorder

Posts: 31

  1. Also seeing this error...rolled back to `3.7.0-1-01` and all is well...
  2. Same here...pulled down the new test update and acquired port right away on the first attempt. Everything is looking good so far.
  3. My external IP is showing accurately in the bottom right. I didn't pay attention when I first connected...so I'm not sure if it refreshes at some interval or not.
  4. Setup next-gen connected to Toronto without issue, though it did take about 40 minutes before finally getting a port. Speeds seem to be as good or possibly better, but I haven't tested much to confirm. As always, thanks @binhex for the top-notch support and keeping this updated!
  5. Wow...0.5PB...that's pretty impressive. Any concerns with monthly bandwidth utilization? No issues from your ISP? I've also been using duplicati for the last few years and have been very happy with it overall. Do you do anything different with your offsite mount to ensure you could recover in the event of, say, an accidental deletion?
  6. Thanks, that makes sense. The more I think about it, the more I'm leaning toward going all in on this to simplify everything. Right now I have a mix of data locally and in gdrive. I'm with you on being lazy...I work in IT and the older I get the less I want to mess with certain aspects of it...I just want reliability. I really only have one share that would even be a concern...and now that I think about it, it should probably live in an encrypted vault...
  7. First @DZMM, thanks for sharing this, and thanks to everyone that has contributed to make it better. I've been using it for a few months now with mostly no issues (had that odd orphaned-image issue a while back that a few of us had with mergerfs). That was really the only hiccup. I'm really considering moving everything offsite... As for hardlinks, this is timely, as I've finally decided to get around to making hardlinks and getting seeding to work properly. When I originally set things up I had:
     /mnt/user/media (with subdirectories for tv/movies/music/audiobooks)
     /mnt/user/downloads (with subdirectories for complete/incomplete)
     Your script came along and I then added:
     /mnt/user/gdrive_local
     /mnt/user/gdrive_mergerfs
     /mnt/user/gdrive_rclone
     I know just mapping all containers to /mnt/user would solve this...but I'm a little apprehensive about giving all the necessary applications read/write to the entire array. I don't have any of this data going to cache...so is there anything stopping me (or a good reason not to) from stuffing everything into /mnt/user/media and then mapping everything to that?
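Hardlink-based seeding only works when the download and library paths end up on the same filesystem as a container sees them, which is the point of consolidating under one /mnt/user/media mapping. A minimal sketch of a sanity check (the `same_fs` helper name is mine, not from any script here; inside a container you would compare the container-side paths):

```shell
# Hypothetical helper: hardlinks only succeed when both paths are on the
# same filesystem, which `stat -c %d` (device ID) can confirm.
same_fs() {
  [ -e "$1" ] && [ -e "$2" ] &&
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

# Example: check that downloads and tv share one filesystem, so a
# *arr-style hardlink (instead of a copy) would work between them.
if same_fs /mnt/user/media/downloads /mnt/user/media/tv; then
  echo "same filesystem: hardlinks will work"
else
  echo "different filesystems (or missing path): expect copies instead"
fi
```

If both directories sit under a single host path that is mapped into every container at the same mount point, the check passes and hardlinks survive.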
  8. @DZMM and @teh0wner Thanks for the help. I'm back up and running and have moved to the latest working scripts with no issues in the last 24 hours. I'm fairly certain the new scripts were failing due to the latest mergerfs not getting pulled. Just for good measure I:
     - Deleted all shares (local, rclone mount, mergerfs mount)
     - Ran docker rmi [imageID] to get rid of the old mergerfs image
     - Reverted back to rclone after some reading up on rclone vs rclone-beta (I don't think this was the issue, but for this purpose I'd rather be on a stable release, and I see nothing that needs the new stuff in the beta release; I'm sure it's fine either way)
     - Pulled the latest GitHub scripts and modified the variables for my setup
     - Did a clean reboot for good measure
     Thanks for all the work putting this together 🤘
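The recovery steps above can be sketched as a small script. This is a hedged reconstruction, not the poster's actual commands: the `cleanup_mergerfs`/`DRY_RUN` names are mine, the share paths are taken from the surrounding posts, and `[imageID]` stays a placeholder to fill in from `docker images`.

```shell
# Sketch of the recovery sequence described in the post. Defaults to a
# dry run (commands are echoed, not executed); set DRY_RUN="" to run.
IMAGE_ID="[imageID]"   # placeholder: find the old mergerfs image via `docker images`

cleanup_mergerfs() {
  run=${DRY_RUN-echo}
  # 1. Delete the shares (local, rclone mount, mergerfs mount)
  $run rm -rf /mnt/user/local /mnt/user/mount_rclone /mnt/user/mount_mergerfs
  # 2. Remove the stale mergerfs build image
  $run docker rmi "$IMAGE_ID"
  # 3. Then pull the latest scripts, adjust variables, and cleanly reboot
}
```

Previewing with the echo default before letting it touch real shares is the whole point of the guard.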
  9. @DZMM I blew away all shares and appdata, rebooted, then manually ran:
     mkdir -p /mnt/user/appdata/other/rclone/mergerfs
     docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
     mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
     I then ran the latest gdrive_rclone_mount script and it runs successfully (shown below). As soon as I even try to list the contents of mount_mergerfs, the terminal freezes, pegs a core on the CPU, and never responds. I'm not sure what to try...
     Script location: /tmp/user.scripts/tmpScripts/gdrive_mount/script
     Note that closing this window will abort the execution of this script
     01.03.2020 12:50:26 INFO: *** Starting mount of remote gdrive_media_vfs
     01.03.2020 12:50:26 INFO: Checking if this script is already running.
     01.03.2020 12:50:26 INFO: Script not running - proceeding.
     01.03.2020 12:50:26 INFO: Mount not running. Will now mount gdrive_media_vfs remote.
     01.03.2020 12:50:26 INFO: Recreating mountcheck file for gdrive_media_vfs remote.
     2020/03/01 12:50:26 DEBUG : rclone: Version "v1.51.0-076-g38a4d50e-beta" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "copy" "mountcheck" "gdrive_media_vfs:" "-vv" "--no-traverse"]
     2020/03/01 12:50:26 DEBUG : Using config file from "/boot/config/plugins/rclone-beta/.rclone.conf"
     2020/03/01 12:50:27 DEBUG : mountcheck: Modification times differ by -12h40m10.128729289s: 2020-03-01 12:50:26.567729289 -0500 EST, 2020-03-01 05:10:16.439 +0000 UTC
     2020/03/01 12:50:29 INFO : mountcheck: Copied (replaced existing)
     2020/03/01 12:50:29 INFO : Transferred: 32 / 32 Bytes, 100%, 20 Bytes/s, ETA 0s
     Transferred: 1 / 1, 100%
     Elapsed time: 1.5s
     2020/03/01 12:50:29 DEBUG : 7 go routines active
     01.03.2020 12:50:29 INFO: *** Creating mount for remote gdrive_media_vfs
     01.03.2020 12:50:29 INFO: sleeping for 5 seconds
     01.03.2020 12:50:34 INFO: continuing...
     01.03.2020 12:50:34 INFO: Successful mount of gdrive_media_vfs mount.
     01.03.2020 12:50:34 INFO: Mergerfs already installed, proceeding to create mergerfs mount
     01.03.2020 12:50:34 INFO: Creating gdrive_media_vfs mergerfs mount.
     01.03.2020 12:50:34 INFO: Checking if gdrive_media_vfs mergerfs mount created.
     01.03.2020 12:50:34 INFO: Check successful, gdrive_media_vfs mergerfs mount created.
     01.03.2020 12:50:34 INFO: Starting dockers.
     "docker start" requires at least 1 argument.
     See 'docker start --help'.
     Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]
     Start one or more stopped containers
     01.03.2020 12:50:34 INFO: Script complete
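The `"docker start" requires at least 1 argument` usage error in the log above is what docker prints when a script reaches `docker start` with an empty container list. A hedged sketch of a guard against that (the `start_dockers` function and its argument are mine, not necessarily how the mount script names things):

```shell
# Guard against calling `docker start` with no container names, which
# produces the "requires at least 1 argument" usage error seen in the log.
start_dockers() {
  if [ -n "$1" ]; then
    # Word splitting on $1 is intentional: "plex sonarr" -> two arguments.
    docker start $1
  else
    echo "INFO: no containers listed, skipping docker start"
  fi
}

# Example: start_dockers "plex sonarr radarr"
```

With the guard, an unset container list logs a note and moves on instead of surfacing docker's usage text mid-script.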
  10. I'm also seeing an issue with mergerfs hanging @teh0wner. I was doing some maintenance last night and decided to update to the latest version of this script, and I could not access the mergerfs mount. It would peg a single core on the CPU and just hang. I reverted back to the older script, which had previously been working for a month+, and I'm seeing the same issue. I blew away all directories (local, mount_rclone, mount_mergerfs, appdata/other) and did a clean reboot, with the same issue. Looking at the last few posts, I'm wondering if it isn't something that changed with trapexit/mergerfs-static-build
  11. Same issue. Applied @bluemonster's fix and everything looks good now. Good work @bluemonster!
  12. I can appreciate that...especially when trying to decide which OS to go with. I will say, I jumped to Unraid about 2.5 years ago and it has been fantastic and one of the easiest to administer. The few times I've had issues, this community has been extremely quick to help resolve them. Hell, I've gotten better support here than I have for enterprise products I pay six figures for (I'm an IT manager). I think this particular issue seems extremely hard to replicate. Plex corrupted on me, but I'm also running lidarr, sonarr, radarr, bazarr, tautulli, syncthing, and pihole, which all seem to be using SQLite to some extent...and I've had no issues with those. Now that I took a closer look, pihole and syncthing are also using /mnt/user, instead of /mnt/cache, and they're fine so far (moving them now though). So why did Plex corrupt and not the others?
  13. Just came across this post and wanted to add another to the list. Plex had been running solid since my last server rebuild nearly 2.5 years ago. I upgraded to 6.7 a few days after release, and a week or two later Plex became corrupt. If I recall, it was shortly after an update to the Plex container...so I never suspected it as a potential Unraid issue. Since Plex had been running for so long, and I was kind of wanting to change some of my libraries around, I decided to just do a rebuild instead of restoring from backups. I also run Sonarr/Radarr/Lidarr with no issues...but they are all using /mnt/cache. I would have sworn the same was true for Plex, but I just pulled down a backup from before my rebuild and, sure enough, I had used /mnt/user. It was the first container I set up and I probably didn't know any better at the time. I believe someone also mentioned something about CA Backup / Restore Appdata...I also run weekly backups of all containers...but I don't recall if corruption happened after a backup. Even if it did, this seems more like a possible correlation than the underlying cause. I know this is all anecdotal, but @CHBMB may be on to something with not only switching to /mnt/cache or disk, but also creating a clean install with a clean database. So far, I've had no issues with Plex since my clean install almost three weeks ago.
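A quick way to audit which containers follow the /mnt/cache advice from the posts above is to check the host side of each appdata mapping. A minimal sketch (the `uses_user_share` name is hypothetical; `docker inspect --format '{{json .Mounts}}' <container>` will list a container's actual bind mounts to feed through it):

```shell
# Hypothetical checker: flag appdata host paths that go through the
# /mnt/user FUSE layer (associated with SQLite corruption in these
# posts), versus direct /mnt/cache or /mnt/diskN paths.
uses_user_share() {
  case "$1" in
    /mnt/user/*) echo "WARN: $1 goes through /mnt/user" ;;
    *)           echo "OK: $1 bypasses the user share layer" ;;
  esac
}

uses_user_share /mnt/user/appdata/plex    # prints a WARN line
uses_user_share /mnt/cache/appdata/plex   # prints an OK line
```

Run the real host paths from `docker inspect` through the check; any WARN is a candidate for remapping to /mnt/cache.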
  14. This fix seems to have solved the issue. When the VM is idle, CPU usage is back to just a few percent, like it was before the 1803 update.