Roken

Everything posted by Roken

  1. Here you go. Rebooted with array started. tower-diagnostics-20240301-0905.zip
  2. Woke up this morning to Plex being offline. I went to restart it via the GUI, but Docker failed to start (not sure if there was a power outage). It looks like Cache_B went into read-only mode according to Fix Common Problems and I can't seem to get it out of it. I've tried the following but I get this error: mount: /temp: wrong fs type, bad option, bad superblock on /dev/nvme0n1, missing codepage or helper program, or other error. dmesg(1) may have more information after failed mount system call. I've been able to make a copy of appdata, some other folders and docker.img, but the pool is still stuck read-only. Any help would be appreciated (a recovery sketch follows this list). tower-diagnostics-20240229-1911.zip
  3. This started happening when Google changed their usage policy, but it happens randomly. Sometimes it happens within 24 hours of restarting, sometimes after a week. I'm unsure what the cause could be, as I can't find any issues in the logs or in the hardware. Here are the diagnostics. Any help will be appreciated. tower-diagnostics-20231012-0833.zip
  4. I had this issue. It's DNS failures. If you have Pi-hole installed, try adding some more DNS servers under its settings; I have Cloudflare & Google (see the sketch after this list).
  5. Go into the miner's dashboard and hit the gear icon. That opens the miner's config, which you can change on the fly.
  6. Anyone have an issue where, after transcoding is done, the fans on the video card don't stop running? I've noticed that unless I restart the Plex container they just continuously run. Any idea how to solve this besides restarting the container (a quick check is sketched after this list)? Nvm, didn't see the above post.
  7. I'm having issues with Nextcloud and LE. When I go to mydomain.tld/nextcloud after setting everything up (configuring the LE proxy and editing the Nextcloud config), all I see is the picture below. Not sure why I don't get any login boxes. Any ideas? (A config sketch follows this list.)
  8. I keep getting these errors:
     message: 'request to http://192.168.1.73:8181/plog/api/v2?apikey=0501b2695dcd44a6ba6df9fbe9b587c1&cmd=get_activity&out_type=json failed, reason: connect EHOSTUNREACH 192.168.1.73:8181'
     ~Tautulli Webhook Startup~ Connection refused [Attempt #1], retrying in 30 seconds...
     I've tried accessing the URL directly and it works fine from Chrome. Any ideas? (A quick connectivity check is sketched after this list.)
  9. I have a 250GB SSD cache drive that is capable of saturating my bandwidth when downloading. I'm currently using the mount location /mnt/user/... for NZBGet, Sonarr etc., but because /mnt/user/ sits on my spinners, downloads cap at around 16 MB/s (I have gigabit). Is it advisable to switch from /mnt/user to /mnt/cache on a 250GB SSD to speed up downloading? Should I just change /downloads <-> /mnt/user/mount_mergefs/google_vfs/downloads on the 3 applications to /downloads <-> /mnt/cache/mount_mergefs/google_vfs/downloads, or would I need to change all of the folders (ie: data location) to cache as well? (A mapping sketch follows this list.)
  10. Can you post a screenshot? I believe I'm overthinking this or am just confused.
  11. OK, so I don't need to download/move files to the local folder then? Just wondering how it knows what's new and what to upload. Also, the upload script points to /mnt/user/local/google_vfs, but that directory is always empty. For example, so I can better understand this:
      NZBGet: /downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads
      Sonarr: /tv <-> /mnt/user/mount_unionfs/google_vfs/tv
              /downloads <-> /mnt/user/downloads
  12. I'm a bit confused on how to map Sonarr et al. with the updated scripts. This is what I have for Sonarr, but the uploader doesn't move the files at all (it tries to delete them instead):
      /config <-> /mnt/user/appdata/sonarr
      /dev/rtc <- /dev/rtc
      /tv <-> /mnt/cache/local/google_vfs/tv
      /downloads <-> /mnt/cache/local/google_vfs/downloads/
      I have the local mergerfs folder on my cache drive so I can saturate my line, as it's an SSD and capable of using my full gigabit. Am I doing something wrong here? It seems like rclone is excluding the tv folder in /mnt/cache/local. (A dry-run check is sketched after this list.)
  13. Hmm, it seems like the rclone GUI keeps dying on me. When I point a web browser at the address I get "404 page not found." It was working fine for about 4 days and now it's pooped itself. *Resolved: rclone rcd needs to be restarted prior to the upload script, so I altered it to start at array start and then kill the process when mounting (see the sketch after this list).
  14. Can you share your upload script that rotates the account in use? Maybe name it rclone_upload_rotate? Is there any speed difference when streaming from Tdrive vs Gdrive? I'm wondering if I should just wait until it's all uploaded and then manually move it all over to Gdrive, since I already started moving to Tdrive. Also, you may want to add a note on GitHub about creating a symlink from a user's current content to mount_unionfs, for clarity.
      *Edit: looking inside /mount_unionfs/ I don't see any files in any folders, but I did some digging and I can see the files in /mount_rclone (which are on the Team Drive). Is something messed up? Content is starting to drop from Plex as I've pointed all its folders to mount_unionfs. (A quick check of the union mount is sketched after this list.) I've changed the mount script to:
      rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off tdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
  15. @DZMM I have a couple of questions that I can't seem to find an answer for.
      1) For the media I currently have, where do I move it to: A) /mnt/user/mount_unionfs/google_vfs/contentfolder (ie: movies) or B) /mnt/user/rclone_upload/contentfolder? Without knowing, I created a symlink from one of my content folders to /mnt/user/rclone_upload, as it wasn't clear on GitHub what exactly to do with existing media, and I wanted to start moving things off the server onto the cloud. What should I do?
      2) I would like to use a Team Drive instead of Gdrive, and have edited the rclone_upload script from /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: to /mnt/user/rclone_upload/google_vfs/ tdrive_media_vfs:. Is this correct, and is it advisable to use Team Drives over Gdrive? (A dry-run sketch follows this list.)
      3) For Sonarr, for example, with the mapping /tv <-> /mnt/user/mount_unionfs/google_vfs/tv: after NZBGet runs the nzbToMedia script to trigger Sonarr, Sonarr will move the files to the above location, and then rclone_upload will move them to the cloud?
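
For post 2 (the read-only Cache_B pool), a minimal recovery sketch. It assumes the btrfs filesystem actually lives on the first partition (/dev/nvme0n1p1 rather than the whole /dev/nvme0n1 disk), which is worth checking before anything else; none of these commands write to the device.

  # confirm which device node actually carries the btrfs filesystem
  lsblk -f /dev/nvme0n1
  # read-only integrity check while the pool is NOT mounted
  btrfs check --readonly /dev/nvme0n1p1
  # read-only mount, falling back to older tree roots if the current one is damaged
  mkdir -p /temp
  mount -o ro,usebackuproot /dev/nvme0n1p1 /temp
  # copy anything still needed off /temp, then unmount
  umount /temp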
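
For post 4, a sketch of what "adding more DNS servers" can look like when Pi-hole runs as a Docker container. The container name pihole and the v5-style setupVars.conf layout are assumptions; on most installs this is easiest done in the web UI under Settings > DNS.

  # see which upstream resolvers Pi-hole is currently using
  docker exec pihole grep PIHOLE_DNS /etc/pihole/setupVars.conf
  # after ticking extra upstreams (e.g. Cloudflare 1.1.1.1 and Google 8.8.8.8) in Settings > DNS,
  # reload the resolver so the change takes effect
  docker exec pihole pihole restartdns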
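
For post 6, a quick way to see whether a transcode is still holding the GPU before resorting to a container restart. The container name plex is an assumption, and nvidia-smi requires the Nvidia driver plugin to be installed.

  # any leftover Plex transcoder processes show up here, along with current fan speed
  nvidia-smi
  # blunt workaround if the GPU is idle but the fans keep spinning
  docker restart plex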
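
For post 7, a sketch of the Nextcloud-side settings that commonly cause a blank page behind a reverse proxy at a /nextcloud sub-path. The container name nextcloud and running occ as www-data are assumptions; adjust for your image (linuxserver images ship an occ wrapper).

  # tell Nextcloud it is served from the /nextcloud web root behind the proxy
  docker exec -u www-data nextcloud php occ config:system:set overwritewebroot --value="/nextcloud"
  docker exec -u www-data nextcloud php occ config:system:set overwrite.cli.url --value="https://mydomain.tld/nextcloud"
  # make sure the public hostname is a trusted domain (index 1 is just an example slot)
  docker exec -u www-data nextcloud php occ config:system:set trusted_domains 1 --value="mydomain.tld"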
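
For post 8, EHOSTUNREACH means the process making the request can't route to 192.168.1.73 at all, even though Chrome on your PC can. A quick check is to run the same request from the unRAID host and then from inside the container that logs the error (placeholder name below), which usually narrows it down to that container's network mode.

  # from the unRAID host
  curl -sv "http://192.168.1.73:8181/plog/api/v2?apikey=0501b2695dcd44a6ba6df9fbe9b587c1&cmd=get_activity&out_type=json" | head
  # from inside the container that prints the EHOSTUNREACH error (only works if it ships curl)
  docker exec <container_with_error> curl -sv "http://192.168.1.73:8181/plog/api/v2?apikey=0501b2695dcd44a6ba6df9fbe9b587c1&cmd=get_activity&out_type=json" | head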
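
For post 9, a sketch of the idea: keep the container-side path (/downloads) identical across NZBGet, Sonarr, etc., and only change the host-side path from /mnt/user/... to /mnt/cache/... so downloads land on the SSD. The image name and port are illustrative; on unRAID you would normally edit the host path in each container's template rather than run docker by hand.

  docker run -d --name=nzbget \
    -p 6789:6789 \
    -v /mnt/user/appdata/nzbget:/config \
    -v /mnt/cache/mount_mergefs/google_vfs/downloads:/downloads \
    lscr.io/linuxserver/nzbget

As long as all three apps keep using the same /downloads path inside their containers, completed-download handling keeps working; whether the final data locations also move to cache is a separate choice.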
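
For post 12, two things worth checking. On unRAID, /mnt/cache/local/... and /mnt/user/local/... are two views of the same files when the local share sits on the cache pool, so mixing the two across containers and scripts is a common source of confusion. A dry run of the upload by hand shows exactly what rclone would transfer or skip (remote name per this thread; the source path is an assumption based on your mappings).

  # same data, two paths, when the share lives on cache
  ls /mnt/cache/local/google_vfs/tv | head
  ls /mnt/user/local/google_vfs/tv | head
  # what would the uploader actually do? (nothing is touched with --dry-run)
  rclone move /mnt/user/local/google_vfs tdrive_media_vfs: -vv --dry-run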
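
For post 13, a minimal sketch of the workaround described there: start the rclone web GUI when the array comes up, then kill the process when mounting/uploading so the scripts don't collide with it. The port is rclone's default and the flags are assumptions about how the GUI was launched.

  # at array start: serve the rclone web GUI in the background
  rclone rcd --rc-web-gui --rc-addr :5572 &
  # just before mounting/uploading: stop it again
  pkill -f "rclone rcd"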
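
For post 14's edit, a quick check that the union mount is actually up and layered over both the rclone mount and the local upload branch; if the union isn't mounted, /mount_unionfs/ will look empty even though /mount_rclone/ has everything. Branch paths follow the scripts discussed in this thread and are assumptions.

  # is anything mounted at the union path at all?
  mount | grep mount_unionfs
  # compare the three views
  ls /mnt/user/mount_rclone/google_vfs | head
  ls /mnt/user/rclone_upload/google_vfs | head
  ls /mnt/user/mount_unionfs/google_vfs | head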
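
For post 15's question 2, a way to sanity-check the edited remote name before letting the upload script loose: confirm the remote exists, then preview the same move the script would perform (paths per the post; nothing is transferred with --dry-run).

  # does the team drive remote exist under the name used in the script?
  rclone listremotes
  # preview what the upload script's move would do
  rclone move /mnt/user/rclone_upload/google_vfs/ tdrive_media_vfs: -v --dry-run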