Everything posted by CorneliousJD

  1. In Settings/Notifications, you can have every setting unchecked and it will still send notifications. This is happening for Docker container updates at the very least, which is the one I've tested and seen on multiple servers now. One server was sending them to Telegram, the other started e-mailing me. I've simply changed it to not check for updates on this page instead.
  2. Fair enough, the old plugin got around it somehow, though I'm not sure how; maybe it just took longer to load the page until it could display properly? I couldn't fathom a guess, but just wanted to check. Understandable. I think a list-only view with no "running" or "stopped" below it would still be great; you could fit more containers per folder in the preview section that way if you stack them like the old plugin used to. But the more I use the icon look today, the more I actually like it. Once again, thanks for picking this up and making it compatible with 6.12 and hopefully beyond! Much appreciated.
  3. Thanks - I did end up trying this but just ended up recreating everything via the webUI, since organizing the ordering took the longest anyway. My final result is below for anyone interested in a before/after. I didn't want too much clutter, so I opted for icons only, since the label only said whether a container was started or not, which wasn't necessary for me. I would like to see a label-only option that just lists the clickable name of the container like the old Docker Folders plugin did, but the icon-only look is sharp too! One thing I did notice: going to the Docker page or the Dashboard first has to load ALL the containers before the plugin finally loads the folders. This was never the case with the old plugin; are there any plans to improve performance so the folders show up right away instead of waiting for them? Thanks for making this plugin, it's great, and I'm glad to see it actively being developed!
  4. Is there a way to convert from the Docker Folders plugin to this? I'd of course like to swap everything over without re-doing it all. My gut tells me it's not possible, but I'd like to check first.
  5. I agree. I split an ultrawide into 3 zones so my browser windows are less than 1920px wide, and I had 3 columns before and it looked and worked GREAT, and I'd really, REALLY like that same functionality back.
  6. I see the same thing after my weekly CA appdata backup. I've been getting around it by simply using a User Scripts script scheduled to run daily at 7:30 AM (cron: 30 7 * * *):

     #!/bin/bash
     # start TubeArchivist back up after the backup has stopped it
     docker start TubeArchivist
  7. I'm on mobile, but check the Apps tab and search for "docker patch". There's a patch to fix that orange "not available" update status. It's an issue with the way unRAID checks for Docker updates: "not available" means it can't check whether there's an update, not that one *isn't* available. You probably have a lot of updates you're behind on, so install that patch, check for updates, and let them ride.
  8. Sorry, but what? The container updates fine and it's up to date, and the "docker version" is the official version. It sounds to me like perhaps you haven't applied the Docker update patch that came out at the server level, and that's why you're seeing "update unavailable"? PS: You could still hit force update.
  9. I am quite busy and can't say this is an app I use regularly, haha. But feel free to open a PR on GitHub against the template and add it, and I can merge it so others benefit from the update. Alternatively, you can simply add it to your own instance of the container. No need to wait for me; just add your own mapping on your own install.
  10. I was running fine but would randomly see all sorts of network issues on my server after that: not being able to update containers, some containers not being able to reach the internet, etc. It seems the way it networks those together with ipvlan didn't play nice with my UniFi router.
  11. For what it's worth, because I kept having nothing but problems, I eventually took one NIC (my server had 4) and dedicated it to a Docker network. No VLANs or anything, but ALL Dockers use that one NIC interface now, it's all on macvlan, and I have not had any problems since (rough CLI sketch below).
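     For anyone curious, here's a minimal sketch of the docker CLI equivalent of that setup. The subnet, gateway, parent interface (eth1), network name, and container are all assumptions for illustration; on Unraid this is normally configured through Settings > Docker rather than by hand:

     # create a macvlan network bound to the dedicated NIC
     docker network create -d macvlan \
       --subnet=192.168.1.0/24 \
       --gateway=192.168.1.1 \
       -o parent=eth1 \
       docker_macvlan

     # attach a container to it with its own LAN IP (IP is an assumption)
     docker run -d --network docker_macvlan --ip 192.168.1.50 nginx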
  12. Sure, I just pushed an update to the template; it should be live in a few hours for new installs from CA. If you already have it installed, you can just add the path from your screenshot for the same effect.
  13. Where does the JSON file end up? There's no appdata path configured in the template, so I'm not sure where I'm supposed to be editing these files. Shouldn't they be stored in appdata to survive updates? I have many trackers I'd like to use this with to auto-login and keep accounts active.
  14. I don't know, unfortunately; you'd want to look up documentation for this specific platform, as this thread is really for Unraid-specific support for the template not working.
  15. A force update certainly shouldn't break anything. Have you looked at logs? What do the logs of the main container say when it ends up stopping? I think anyone here (myself included) will need some more info before we can really help. Also, not that I suggest this yet, but if all else fails you could blow away the entire stack of 3 containers, start fresh, and restore from a snapshot backup within TA itself.
  16. Which container, as there are 3? Did you replace the repo for the RedisJSON container? Which of the 3 said it didn't have an update available? Forcing an update shouldn't break anything.
  17. I honestly didn't look at the Redis Stack logs; I just let it do its thing. The path.repo variable was already there for me in the ES container, hidden behind the "show more" section, so no changes there for me. All I did was drop in the replacement Redis container and let it ride. I have 128GB of RAM in my system, so I doubt low memory would ever be an issue for me, but perhaps the maintainers of the template or the TA devs can shed more light on the error you're seeing.
  18. I just saw the following update posted yesterday to the TubeArchivist Discord channel. According to this, we just change redislabs/rejson to redis/redis-stack-server and it should be a drop-in replacement. On startup I can confirm I see the following now with the change. So to confirm, this is now my current setup (sketch of the change below).
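     For anyone doing this by hand, a minimal docker run sketch of what the swap amounts to (container name and appdata path are assumptions; on Unraid you'd just change the Repository field in the container template):

     # before
     docker run -d --name TubeArchivist-RedisJSON \
       -v /mnt/user/appdata/TubeArchivist-RedisJSON:/data \
       redislabs/rejson:latest

     # after: drop-in replacement, same volume mapping, nothing else changes
     docker run -d --name TubeArchivist-RedisJSON \
       -v /mnt/user/appdata/TubeArchivist-RedisJSON:/data \
       redis/redis-stack-server:latest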
  19. Looks like this is just on your end; I just force-updated all my instances of Uptime Kuma and have no issues. You should be taking backups of all your appdata at regular intervals as well; if you are, you could restore from that.
  20. I can't say for sure without recreating this issue, but it sounds like a permissions issue. Try chmod 777 on the appdata and storage data paths for now to see if it can scan (example below).
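     Something along these lines, assuming a typical Unraid share layout (both paths are placeholders; substitute your actual appdata and storage paths):

     # recursively open up permissions so the container can read/write everything
     chmod -R 777 /mnt/user/appdata/<container>
     chmod -R 777 /mnt/user/<storage-share>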
  21. So glad you're seeing the same speed improvements; I have been running and using Nextcloud this way for months now and it is so much better!
      1. Looks like others on GitHub have had your same issue with the disable rewrite IP setting: https://github.com/nextcloud/docker/issues/1494 For what it's worth, I am NOT running into that issue, though. I'm not sure if you even need it; iirc it's for logging, but I'm going off memory here. Here's my whole setup (below).
      2. The image providers are built in. You do need to install ffmpeg for video previews, but I have a recurring 7AM cron job that installs it into the container so it survives container rebuilds (sketch below). All the others should be baked in by default already.
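     That ffmpeg job is just a small User Scripts entry; here's a sketch, assuming the official Debian-based nextcloud image and a container named "nextcloud" (both assumptions):

     #!/bin/bash
     # cron: 0 7 * * *  (daily at 7 AM)
     # reinstall ffmpeg inside the container so it survives rebuilds
     docker exec nextcloud apt-get update -qq
     docker exec nextcloud apt-get install -y ffmpeg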
  22. I see the ntfy container was added today, but one already exists, with the same Docker Hub repo too: binwiederhier/ntfy:latest. I believe Squid's normal stance here is that whoever had it first keeps their template, unless it's unmaintained/deprecated. EDIT: Note that I don't want to speak for Squid at all, just speaking from personal experience. I only say something because I saw that ntfy was added when I already had it installed, which made me look at whether there was a different container or what.
  23. I recently installed TA with latest on everything, and from what I see, everything is working.
  24. This isn't an unRAID-specific issue. You'll want to file an issue on the developer's GitHub.
  25. Update in case anyone has similar issues later... I'm pre-emptively calling this fixed because it's been going for over 2 hours now without locking up; I certainly never got that far yesterday when doing all my testing. If anyone finds this in the future and has the same issue on Unraid: I added this path to resolve it, which prevents the download files from being put on your btrfs cache drive(s) and then having to be moved to the array. This lets it download straight to the array, and it just moves the files to the other path when finished, still on the same array and filesystem (hypothetical example below).
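     The exact mapping was in my screenshot, but as a hypothetical illustration of the idea (share name and container-side path are both made up for the example): on Unraid, /mnt/user0/<share> is the array-only view of a user share, so pointing the download path there skips the cache pool entirely.

     # hypothetical extra path mapping in the container template;
     # the host side bypasses the btrfs cache by using /mnt/user0
     -v /mnt/user0/<share>/downloads:/downloads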