remserwis

Members
  • Posts

    3
  • Joined

  • Last visited

  1. Has anyone else experienced this behavior? Usually when a database error or something else caused the Plex server to stop working, I would see it in the log. In recent weeks I've noticed sudden stops every couple of days (sometimes every day). No info, no log entry, nothing... I just can't connect through a client or the web UI. I can't pin it to anything I can think of; everything else works fine. I've cleared and checked the cache, the docker image size, etc. Restarting the Docker container helps instantly, but only for another day or so.
     I have a regular setup with Nginx Proxy Manager and currently no rsync or anything, just local mounts. One workaround, though: the app folder is mapped straight from the cache drive to bypass the "Unraid speed toll" (the /mnt/user FUSE overhead). That worked fine for months... maybe recent Unraid upgrades changed the behavior of direct cache paths? I have Sonarr and Radarr set up the same way and they work fine. Any ideas? (A small log-check sketch is included after these posts.)
  2. Thank you for taking the time, Kaizac. I am sorry for the confusion; it's because I haven't mastered or completely understood the background mechanics of rclone or mergerfs. I just learn by changing settings and watching the results.
     So this cache I was talking about: it looks like it is the mergerfs mount's cache of the accessed folder structure, kept for fast access. I don't care much about it (it fakes the structure so the cloud files look local/offline). Originally I thought it held content and files downloaded by Plex or the RR's. It turns out my RR's, via SABnzbd, download properly to the local folder in my data share, and that gets merged with the online files provided by rclone in the unionfs folder. LOVE IT.
     I will check today, but my Unraid cache should be irrelevant: the data share keeps freshly downloaded files on the Unraid cache until the mover puts them on the array, and that happens invisibly at the share level, so it shouldn't affect the merged unionfs folder. I will set this mergerfs cache mount to stay on the cache pool all the time (backed up to the array only) so the mover won't touch it and access stays very fast.
     And finally, I mentioned my 350TB library because in the last two months Google shut down the unlimited plan for cloud storage, and Dropbox, the only alternative, is limiting/shutting it down as we speak. So I know (also from previous replies in this thread) there are plenty of data hoarders like me with Plex libraries between 100TB and 1PB who are looking for a way out. The only way out is a costly investment, $8K-$10K minimum, in drives, SAS cards and better servers. My hybrid approach is meant to be a "one disk at a time" solution, where you upgrade your Unraid server gradually as you download data from GDrive. I think Google will give some notice before deleting the currently read-only accounts with hundreds of TBs on them.
     Thank you also for the script to copy folders. It won't work for me, or at least will be hard to use by folder, since I have only 6 of them. Hopefully there is another option, maybe moving (not copying) data up to, for example, a 20TB quota (when one disk is added), so that when it is executed again a month later with another disk added, it moves the next batch of files (see the rclone sketch after these posts). I have managed directories and files in Plex for years now, and splitting them into many folders while managing them with the RR's is a pain. Constantly changing root folders to match the RR's, Plex and genre (kids, movies, etc.) doesn't work well with 15K+ movies and 3K+ TV shows. To make things funnier, I intend to encode some old stuff to H.265 once I have it locally; some of it the RR's will re-download in that format, but as the library of 4K HDR UHD lossless movies grows (each one is 60-80GB), I am looking at even more total space needed.
     Anyway, this post derailed from the main topic, sorry for that, but while reading past replies to learn I've seen a lot of similar interest from others.
  3. Moving from Google Drive to Unraid - SaltBox
     Problem: I think I am not the only one with this problem, as stated in the previous post. Plex/Emby users with huge libraries got stuck after the recent changes. Google put Workspace accounts in read-only mode and will surely delete them at some point. Dropbox was a temporary solution, but they are limiting the "unlimited" plan (after everyone started moving there) and will eventually shut it down as well. My local Unraid server can't handle 350TB yet and will require significant investment to do so. I've decided to run a hybrid setup for now and start spending money to go fully local. This script is the key to that; thank you for creating it, because before I found it I was hopeless, as I couldn't afford a one-time investment to move all the data. I had a SaltBox server (the modern version of Cloudbox), so I was able to copy the rclone config with the service accounts, and it mounts the Team Drives nicely. The media is split across a few Team Drives, which are mounted using rclone's union remote (option "45. Union" in the rclone config) and show up basically as one Media folder, which is great (same as on SaltBox).
     Idea: merge the local folder with the cloud drive under the name unionfs to keep all the Plex/RR's/etc. configs when moving the apps. In that case:
     1) All new files would be downloaded locally to the "mnt/user/data/local/downloads" folder.
     2) The RR's would rename them locally into the "mnt/user/data/local/Media" folder, and they would never get uploaded.
     3) Old GDrive files would be mounted as "mnt/user/data/remote/Media".
     4) The merged folder would be "mnt/user/data/unionfs/Media".
     5) Plex and the RR's would use "mnt/user/data/", mapped as "/mnt" in the Docker settings (this is just to keep the folder scheme from SaltBox).
     Questions:
     - How do I avoid the cache mount? I would love the RR's or Plex to write directly to the "mnt/user/data/local/Media" folder. If I create a file or directory there on the command line, it works as intended and shows up in the merged "mnt/user/data/unionfs/Media". But when Plex scanned one library (using the "mnt/user/data/unionfs/Media" path), it created its metafiles with the proper directory structure, only inside the cache mount. (A mergerfs branch-order sketch is included after this post.)
     - What would be the script/command to start moving data from GDrive to this "mnt/user/data/local/Media" folder, which at the end of this long process will hold all the media? If it could be controlled manually by folder or by a data cap, that would be great, as I would love to do it while adding one or a few disks at a time (budget restricted). (See the rclone sketch below.)
     So far the only thing I had to change in the script was to delete "$RcloneRemoteName" from the path in all three variables (to have the local and Team Drive root content directly in the merged "mnt/user/data/unionfs" folder):
     RcloneMountLocation="$RcloneMountShare"
     LocalFilesLocation="$LocalFilesShare"
     MergerFSMountLocation="$MergerfsMountShare"
     I hope my thoughts can be useful to someone. I can gladly help with the parts of the process I was able to figure out (with my limited Linux skills), and I am hoping for some insights/help as well.
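
A minimal log-check sketch for the first post above. The container name "plex", the default port 32400 and the watchdog log path are assumptions; adjust them to your own setup. It only captures the last container log lines and restarts Plex when the web UI stops answering, it does not diagnose the underlying cause.

```bash
#!/bin/bash
# Hypothetical watchdog for an unresponsive Plex container (run from cron or User Scripts).
# Container name "plex", port 32400 and the log path are assumptions, not part of the original post.

LOG=/mnt/user/appdata/plex-watchdog.log

# /identity is a lightweight Plex endpoint that answers even without authentication.
if ! curl -fs --max-time 10 http://localhost:32400/identity > /dev/null; then
    echo "$(date) Plex not responding, saving recent container logs and restarting" >> "$LOG"
    docker logs --tail 50 plex >> "$LOG" 2>&1
    docker restart plex
fi
```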
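
On the batch-move idea in the second post, a hedged sketch using rclone's --max-transfer flag; the remote name "gdrive:Media" is an assumption, and the local path follows the post. Run it once per added disk: each run stops after roughly the stated amount, and the next run picks up the remaining files.

```bash
# Sketch only: move about one new disk's worth of data per run, then stop.
# "gdrive:Media" is an assumed remote name; match it to your own rclone remote.
# --cutoff-mode=soft lets the file already in flight finish instead of truncating it.
rclone move gdrive:Media /mnt/user/data/local/Media \
    --max-transfer 20T \
    --cutoff-mode=soft \
    --transfers 4 \
    --progress \
    --log-file /mnt/user/appdata/rclone-batch-move.log \
    --log-level INFO
```

Files moved this way disappear from the remote branch and are served from the local branch through the same merged folder, so Plex and RR's paths stay unchanged.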
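
On the cache-mount question in the third post, a hedged mergerfs sketch with the local branch listed first. With category.create=ff ("first found"), every new file written into the merged path lands on the first branch in the list, the local one, so nothing new ends up on the rclone side. Paths follow the post; the option set is an assumption based on common rclone + mergerfs setups, not necessarily the script's exact defaults.

```bash
# Local branch first, remote (rclone) branch second; with category.create=ff all new
# writes go to the first branch that can take them, which here is always the local one.
mergerfs /mnt/user/data/local:/mnt/user/data/remote /mnt/user/data/unionfs \
    -o rw,allow_other,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true
```

If Plex still writes its metadata into the cache path rather than the merged one, it is worth checking which host path its Docker mapping actually points at, since mergerfs only controls writes that go through the merged mount point.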