Everything posted by Bolagnaise

  1. @DZMM Is your upgrade post from unionfs to mergerfs on page 46 still correct? Just so I can type it out to sanity-check myself, to upgrade I need to: 1. Unmount the drive and finish any current uploads. 2. Copy the new mount and unmount scripts to replace my current mount script, using Ctrl+F to find and replace mount_mergerfs with mount_unionfs (does this also apply to the new upload script? I still have a lot of data in rclone_upload left to upload, but no transfer currently in progress). 3. Run the mount script and adjust the upload script to only upload at 1 AM every day.
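Step 2's find-and-replace can be sketched in shell. This is only a sketch: the directory you pass in is an assumption (on Unraid, user scripts usually live under /boot/config/plugins/user.scripts), so point it at wherever your scripts actually are.

```shell
# Sketch of step 2's find-and-replace: rename every mount_mergerfs
# reference in the new scripts to mount_unionfs, so they keep pointing
# at the existing folder layout.
rename_mergerfs_refs() {
  # $1: directory holding the mount/upload/unmount scripts (an assumption --
  # adjust to your own user-scripts location).
  grep -rl 'mount_mergerfs' "$1" 2>/dev/null |
    xargs -r sed -i 's/mount_mergerfs/mount_unionfs/g'
}

# Example: rename_mergerfs_refs /boot/config/plugins/user.scripts/scripts
```

Worth running a `grep -r mount_mergerfs` afterwards to confirm nothing was missed before remounting.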
  2. Feck, you're wanting to make me switch, but I'm scared I'm going to ruin something and I'll lose 60TB of stuff. I also need to set up Team Drives as well.
  3. @DZMM A script enhancement might be to add --drive-stop-on-upload-limit; it will kill the upload if the 750GB limit is reached. https://forum.rclone.org/t/new-flag-for-google-drive-drive-stop-on-upload-limit-to-stop-at-750-gb-limit/13800
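For anyone wondering where it goes: the flag is simply appended to the rclone upload command. A hedged example; the remote name and paths below are placeholders, not anyone's actual script.

```shell
# Remote name and paths are placeholders -- substitute your own. With
# --drive-stop-on-upload-limit set, rclone exits as soon as Google signals
# the 750GB/day upload quota instead of retrying until the next quota reset.
rclone move /mnt/user/rclone_upload gdrive_media_vfs: \
  --drive-stop-on-upload-limit \
  --log-file /mnt/user/appdata/other/rclone/upload.log \
  -v
```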
  4. For what it's worth, I'm still on unionfs. It's been rock-solid stable for me for over 12 months now, and since getting gigabit internet, load times have been instantaneous.
  5. Dude, I had the same issues! I worked it out: the /data mapping got changed somehow, so the torrent couldn't write to the correct mapped location I had set inside qBittorrent!
  6. I installed this other repo and it works perfectly, no stalled connections.
  7. All downloads are stalled. I'm using PIA, connected to Sweden after port forwarding went down on Montreal this week. I have tested via Windows using the PIA Windows application and qBittorrent works fine behind a PIA VPN connection, so it's something to do with the Docker container. It had been working fine for months; now every torrent defaults to stalled.
  8. @CyaOnDaNet New error when running !notifications list: DiscordAPIError: Maximum number of guild roles reached (250) at /app/node_modules/discord.js/src/client/rest/RequestHandlers/Sequential.js:85:15 at /app/node_modules/snekfetch/src/index.js:215:21 at processTicksAndRejections (internal/process/task_queues.js:94:5) { name: 'DiscordAPIError', message: 'Maximum number of guild roles reached (250)', path: '/api/v7/guilds/632476253542154240/roles', code: 30005, method: 'POST' }
  9. @CyaOnDaNet In your guide you wrote it should be: If you want to see if it's working, run `!bot logchannel #channel`
  10. IT WORKED! For anyone with LetsEncrypt and a reverse proxy set up for Sonarr and Tautulli, this is how you configure the settings.
  11. Yes it does (well, did), so it seems that's not the issue. I installed the default Tautulli, same issue. Best bet is to nuke everything, I guess.
  12. No problem, I'm going to uninstall the linuxserver version and try the Tautulli version.
  13. I have googled the shit out of it; I think it might have something to do with me using a base URL for a reverse proxy.
  14. Same issue still happening. FetchError: invalid json response body at reason: Unexpected token < in JSON at position 0 at /app/node_modules/node-fetch/lib/index.js:272:32 at processTicksAndRejections (internal/process/task_queues.js:94:5) at async tautulliService.getActivity (/app/src/tautulli.js:107:22) at async Job.job (/app/index.js:250:15) { Going to the link in the log I get this: https://gyazo.com/5b11cb1597245f506c1839d501eed915
  15. Not turned on. More errors now appearing: FetchError: invalid json response body at reason: Unexpected token < in JSON at position 0 at /app/node_modules/node-fetch/lib/index.js:272:32 at processTicksAndRejections (internal/process/task_queues.js:94:5) at async module.exports (/app/src/tautulli.js:214:18) { message: 'invalid json response body at reason: Unexpected token < in JSON at position 0', type: 'invalid-json' }
  16. I cannot get it to connect to Tautulli; it keeps giving this error. API access is enabled in Tautulli and it's the correct API key. message: 'invalid json response body at''TOKENREMOVED''&cmd=get_activity reason: Unexpected token < in JSON at position 0',
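The "Unexpected token < in JSON" in these errors means node-fetch received an HTML page (a proxy error page or login screen, which starts with `<`) where it expected JSON. A quick way to check what a URL is actually returning, sketched as a small shell helper; the example URL in the comment is a placeholder for your own host, base URL, and API key.

```shell
# Classify a response body: '<' means HTML came back (usually a reverse
# proxy / base URL problem), '{' or '[' means real JSON.
classify_body() {
  first=$(printf '%s' "$1" | tr -d '[:space:]' | cut -c1)
  case "$first" in
    "<")     echo "HTML" ;;
    "{"|"[") echo "JSON" ;;
    *)       echo "unknown" ;;
  esac
}

# Example against a live server (host, port, base URL and API key are
# placeholders for your own setup):
# classify_body "$(curl -s 'http://SERVER:8181/tautulli/api/v2?apikey=KEY&cmd=get_activity')"
```

If it prints HTML, the bot is hitting the proxy (or the wrong base URL) rather than the Tautulli API itself.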
  17. I found it, you put me on the right path: it was a shares issue, and I had a 4K movies folder left over on disk 2 of my array, which was appearing in the union mount before the script ran.
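For anyone hitting the same thing, leftover folders like this can be spotted before they break the mount. A sketch only: the default path is an assumption, so point it at the local branch of your own union (e.g. the per-disk or /mnt/user share that sits under the union mount).

```shell
# List empty leftover directories under a union branch. The default path is
# an assumption -- pass the local side of your union mount. Review the
# output before deleting anything (or add -delete to the find once sure).
list_empty_dirs() {
  find "${1:-/mnt/user/mount_unionfs}" -mindepth 1 -type d -empty 2>/dev/null
}

# Example: list_empty_dirs /mnt/disk2/mount_unionfs
```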
  18. @DZMM I'm sure the answer is in here somewhere but I'll ask anyway. I created a 4K movies folder in unionfs, but I accidentally did it before the mount was active. Now every time I do a server reboot, that folder is always there (completely empty) and I have to manually delete it to get the mount to start. How do I fix this?
  19. I've never used SpaceInvader's script; are you creating directories and mounting them using the script? I highly, highly recommend using the scripts created on page 1 by DZMM for mounting Google Drive, as they will stop you getting API bans from Google.
  20. No idea; my rclone version in Unraid says: 'Version 2019.10.13b, Fusermount compatibility fix for future unRaid versions'.
  21. The latest rclone update fixes the fusermount3 issue. Unmount, upgrade rclone and remove the symlink line from the go file, then remount.
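Sketched as commands, with paths as assumptions: the go file on Unraid is /boot/config/go, the mount point is a placeholder, and the exact symlink workaround line depends on what you added, so verify the grep match before deleting anything.

```shell
# 1. Unmount the rclone mount (mount point is an assumption).
fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs

# 2. Upgrade rclone (via the plugin update page, or on recent versions):
rclone selfupdate

# 3. Find the fusermount symlink workaround line in the go file and delete
#    it -- inspect the grep output first so you only remove the line you
#    added for the workaround:
grep -n 'fusermount' /boot/config/go
# sed -i '/fusermount/d' /boot/config/go   # run once you've verified the match

# 4. Remount by re-running your normal mount script.
```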
  22. Yeah, it's definitely related to the router, but as I said, I just reduced the BW limit to 3000 and it no longer drops anymore.
  23. @DZMM So I tried running the upload script last night and the mount immediately disconnected, throwing errors in the mount log saying it couldn't pull the API key, which made me realise exactly what the original issue was. I had BW set to 9000 in the script, but I use a 5G router to perform the uploads and it only has a 4G uplink speed of around 45 Mbps. So basically every time the script ran it would crash my router and the mount would disconnect; as soon as I stopped the upload or rebooted, it would work again. Maybe a warning to everyone that changing the BW limit is a must. @jamesac2 only has a 10 Mbps upload, so maybe that's why he's also getting disconnects.
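The takeaway generalises: keep rclone's --bwlimit well under the uplink's real capacity. A hypothetical helper (the function name and the 50% headroom factor are mine, purely illustrative) that converts an uplink speed in Mbit/s into a conservative --bwlimit value in MByte/s:

```shell
# Hypothetical helper -- converts an uplink speed in Mbit/s into a
# conservative rclone --bwlimit in MByte/s. Dividing by 8 converts bits to
# bytes; dividing by 16 additionally keeps roughly half the uplink free so
# the router is never saturated.
safe_bwlimit() {
  echo "$(( $1 / 16 ))M"
}

# e.g. for a 45 Mbit/s uplink:
#   rclone move /mnt/user/rclone_upload gdrive: --bwlimit "$(safe_bwlimit 45)"
# which passes --bwlimit 2M (about 16 Mbit/s).
```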
  24. No worries man, that's the way it is: everything works completely fine... until it doesn't. You have done everyone a service, so I don't mind a few late nights troubleshooting; you're literally saving me money with this script. Anyway, 12 hours uptime now, zero dismounts, and I successfully moved everything onto a brand-new Unraid build on new hardware. Looks like it's fixed.