CorneliousJD

Members
  • Posts: 655
  • Joined
  • Last visited
  • Days Won: 1

CorneliousJD last won the day on October 28 2020

CorneliousJD had the most liked content!

3 Followers


CorneliousJD's Achievements

Enthusiast (6/14)

110 Reputation

  1. Sure, just pushed an update to the template; it should be live in a few hours for new installs from CA. If you already have it installed, you can just add the path from your screenshot for the same effect (a generic path-mapping sketch is after this list).
  2. Where does the JSON file end up? There's no appdata path configured in the template, so I'm not sure where I'm supposed to be editing these files. Shouldn't they be stored in appdata to survive updates? I have many trackers I'd like to use this with for auto-login, to keep accounts active.
  3. I don't know, unfortunately; you'd want to look up the documentation for this specific platform, as this thread is really for Unraid-specific support for the template not working.
  4. A force update certainly shouldn't break anything. Have you looked at the logs? What do the logs of the main container say when it ends up stopping (see the log-checking example after this list)? I think anyone here (myself included) will need some more info before we can really help. Also, not that I suggest this yet, but if all else fails you could blow away the entire stack of 3 containers, start fresh, and restore from a snapshot backup within TA itself.
  5. Which container? There are 3. Did you replace the repo for the RedisJSON container? Which of the 3 said it didn't have an update available? Forcing an update shouldn't break anything.
  6. I honestly didn't look at the redis-stack logs, I just let it do its thing. The path.repo variable was already there for me in the ES container, hidden behind the "show more" section, so no changes there for me; all I did was drop in the replacement redis container and let it ride. I have 128GB of RAM on my system, so I doubt low memory would ever be an issue for me, but perhaps the maintainers of the template or the TA devs can shed more light on the error you're seeing.
  7. I just saw the following update posted yesterday to the TubeArchivist Discord channel. According to this, we just change redislabs/rejson to redis/redis-stack-server and it should be a drop-in replacement (a sketch of the swap is after this list). On startup I can confirm I see the following now with the change. So to confirm, this is now my current setup.
  8. Looks like this is just on your end; I just force-updated all my instances of uptime-kuma and have no issues. You should be taking backups of all your appdata at regular intervals as well; if you are, you could restore from that?
  9. I can't say for sure without recreating this issue, but it sounds like a permissions issue. Try a chmod 777 on the appdata and storage data paths for now to see if it can scan (see the sketch after this list)?
  10. So glad you're seeing the same speed improvements; I have been running and using Nextcloud this way for months now and it is so much better! 1. Looks like others on GitHub have had your same issue with the disable rewrite IP setting: https://github.com/nextcloud/docker/issues/1494 For what it's worth, I am NOT running into that issue, though. I'm not sure if you even need it; iirc it's for logging, but I'm going off memory here. Here's my whole setup (below). 2. The image providers are built in. You do need to install ffmpeg for video previews, but I have that on a 7AM recurring cron job that installs it into the container (so it survives container rebuilds; see the sketch after this list). All the other providers should be baked in by default already.
  11. I see the ntfy container added today, but one already exists, with the same Docker Hub repo of binwiederhier/ntfy:latest. I believe Squid's normal stance here is that whoever had it first should keep their template, unless it's unmaintained/deprecated. EDIT: Note that I don't want to speak for Squid at all, just speaking from personal experience. I only say something because I saw that ntfy was added when I already had it installed, which made me look at whether there was a different container or what.
  12. I recently installed TA with the latest on everything, and from what I see everything is working.
  13. This isn't an Unraid-specific issue. You'll want to file an issue on the developer's GitHub.
  14. Update in case anyone has similar issues later... I'm pre-emptively calling this fixed because it's been going for over 2 hours now without locking up; I certainly never got that far yesterday when doing all my testing. If anyone finds this in the future and has the same issue on Unraid, I added this path to resolve it (a hypothetical version is sketched after this list). It prevents the download files from being put on your btrfs cache drive(s) and then having to be moved to the array; this lets it download straight to the array, and then it just moves them to the other path when finished, still on the same array and filesystem.
  15. Rgr that, I agree it's a VERY strange issue. I download tons of data via Transmission and qBT as well and have never had an issue. The only thing TA is doing differently is that it downloads to my cache drive first and then gets processed/moved to the array. I have NOT tried changing that config yet to put the /appdata/tubearchivist/downloads folder onto the array directly, but I have other friends using this who already have over 10k videos and have not run into this issue. Something very odd is going on, I just can't pin down what! I've copied the majority of my two posts into a GitHub issue; here's the link if anyone wants to chime in with anything: https://github.com/tubearchivist/tubearchivist/issues/402
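
For the appdata path in items 1 and 2: the sketch below just shows the general shape of adding a host-path mapping so the JSON config files survive image updates. The container name, repo, and both paths are hypothetical placeholders, not the template's actual values.

    # Hypothetical example: map an Unraid appdata share into the container
    # so its JSON config files persist across image updates.
    docker run -d \
      --name=some-container \
      -v /mnt/user/appdata/some-container:/config \
      some/repo:latest

In the Unraid template editor the same thing is done by adding a Path entry with those host and container values.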
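
For the log check in item 4: this is just the standard docker logs call from the Unraid terminal; the container name here is an assumption and may differ on your system.

    # Follow the last 100 lines of the main TubeArchivist container's log.
    # Replace "tubearchivist" with whatever your container is actually named.
    docker logs --tail 100 -f tubearchivist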
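
For the Redis swap in item 7: a minimal sketch of the drop-in replacement, assuming the container is named TubeArchivist-RedisJSON and keeps its data under the usual appdata share (both assumptions; your template values may differ).

    # Point the existing Redis container at redis/redis-stack-server instead of
    # redislabs/rejson, keeping the same /data volume so existing data is reused.
    docker run -d \
      --name=TubeArchivist-RedisJSON \
      -v /mnt/user/appdata/TubeArchivist-RedisJSON:/data \
      redis/redis-stack-server:latest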
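
For the permissions test in item 9 (recursive chmod assumed; the paths are placeholders for your actual appdata and storage shares):

    # Temporarily open up permissions to rule out a permissions problem.
    # 777 is for testing only; tighten it again once the scan works.
    chmod -R 777 /mnt/user/appdata/your-container
    chmod -R 777 /mnt/user/your-storage-share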
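
For the ffmpeg cron job in item 10: a sketch of what a recurring install could look like, assuming the container is named nextcloud and uses the Debian/Apache-based official image (both assumptions); on Unraid this would typically be scheduled daily through the User Scripts plugin.

    # Re-install ffmpeg inside the Nextcloud container so video previews keep
    # working after the container is rebuilt or updated.
    docker exec nextcloud apt-get update
    docker exec nextcloud apt-get install -y ffmpeg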
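
For the path in item 14: the actual mapping was shown in a screenshot that isn't reproduced here, so the volume below is a purely hypothetical stand-in, assuming TA keeps its downloads under the container's /cache directory; the other required template settings (ports, Elasticsearch/Redis variables, etc.) are omitted.

    # Hypothetical extra volume: point the download/cache directory at an
    # array-only share (cache disabled) so downloads skip the btrfs cache pool.
    docker run -d \
      --name=tubearchivist \
      -v /mnt/user/tubearchivist-cache:/cache \
      -v /mnt/user/media/youtube:/youtube \
      bbilly1/tubearchivist:latest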