coolasice1999

Everything posted by coolasice1999

  1. https://pastebin.com/xqePujMQ Never mind... I guess PIA's DNS servers are dead; added Cloudflare and away it went.
  2. Mine started doing the exact same thing yesterday... Haven't changed a thing other than restoring a previous Docker backup.
  3. It's not a powerhouse and it can't do a whole lot, but it works seamlessly as a Kodi SQL server for all my media, after quite a fandangle to get it automated (a shared-database config sketch follows after this list). Very low power: at full parity check with 4 drives and 3 SSDs, plus a home security NVR, Wi-Fi router, and cable modem, I'm drawing 99 watts total... love it. Finally bought a CyberPower 1500VA UPS for those pesky power blips we've been getting during thunderstorms.
  4. I tried to update my Docker container, but the new Radarr v3 throws an error migrating my database to v3... had to roll back to v2 and restore my database from backup.
  5. I picked up a handful of ASMedia-based 2-port cards to add SSDs to my system. Filled up all my x1 slots. Kept my HDDs on the HBA card and now have 6 SSDs connected.
  6. Moving appdata didn't fix it... but I did find that shutting down the array caused it to freeze, and after a hard reboot it fixed itself.
  7. That's the plan for tonight after work as the array is currently in use.
  8. Not sure how that's possible when the only folder on the cache drive is the appdata folder. Guess I'll have to run a backup tonight and do a restore on that one.
  9. Okay, but I only have 4 GB of data on there; how do I find out what else is using all the space? Even Windows tells me there's only 4 GB of data on there.
  10. Not sure if my cache problem is related to overhead from the RAID 0 configuration or something else entirely. I have a RAID 0 pool of 3 small 32GB SSDs, which should yield a 96GB cache pool, and it does, but the drive shows 18GB used. My appdata folder is only 4.4GB and is currently the only thing on the cache drive. What could possibly be using 14GB that doesn't show up on the drive? (There are no hidden files that I missed.) Wondering if I should recreate the cache pool. (A btrfs usage sketch follows after this list.)
  11. Think I got it sorted... went ahead and created a new docker.img on the cache instead of an unassigned device, made it only 12GB, and it seems to be working fine and updating as needed. Found out my HBA doesn't support TRIM on SSDs, which is where the unassigned device was connected; not sure if that had something to do with it. (A TRIM-check sketch follows after this list.)
  12. So I went ahead and expanded my Docker image size to 20GB... and once again, it still doesn't update. The only container that doesn't give me problems is duckdns; it updates just fine. All the others (binhex-jackett, mariadb, bazarr) never update properly: they look like they're updating, but never actually update. Annoyed to have to delete the container every time to update it... could having my docker.img on an unassigned SSD cause the problem?
  13. Yeah, I tried to figure out how to pass the variable to the file, but alas, I have no clue.
  14. There already is a TMM GUI version; this uses the same software, just via the command line. There is one catch: if you use Radarr/Sonarr and your files get updated, TMM will not rescrape the data for the newer file. You have to do that manually with the GUI version, as it already thinks the movie is scraped. It's a limitation of TMM itself.
  15. To change the run time you have to change the entrypoint.sh and rebuild the Docker image (see the entrypoint sketch after this list). I didn't try too hard to make it a variable, as it worked for my needs.
  16. Install the GUI version (https://hub.docker.com/r/coolasice1999/tmm) and set it to use the same appdata folder as the CLI version. Set media to your media folder the same as the CLI (see my screenshot of settings). Start the GUI version and configure your metadata setup the way you want it. Since they share the same appdata, the config from the GUI carries over to the CLI container. Once you know the GUI works, you can stop that container and keep the CLI running; a docker run sketch follows after this list. As currently built it is set to run at 12:30 AM every day. It took me a couple of months to figure this all out, so I may not remember exactly how I did it.
  17. I copied the ls kodi docker and added a cron job script to automatically update the library every night regardless of Radarr/Sonarr (see the library-scan sketch after this list).
  18. I created my own docker that uses a cron job to scrape as needed: https://hub.docker.com/repository/docker/coolasice1999/tmm-cli-cronjob and https://github.com/coolasice1999/tmmcli. Feel free to copy it and create your own docker. It uses the same appdata as the GUI version.
  19. I just can't see how extending my free space from 7GB to 15GB would be any help for a <600MB container. Could having my img on an unassigned device have anything to do with it?
  20. Not sure why I would need 15GB of free space to update a 600MB container. It used to work just fine with an 8GB image under 6.7.2.
  21. So how big should my docker image be? It's more than double the total size of my containers. My image size is 12GB and I'm only using 42% of it. I know MariaDB isn't 7GB large...
  22. I constantly have problems with Docker updates. MariaDB is the biggest one: each time there is an update, I have to remove the container and re-add it in order for it to update properly; otherwise it never updates. It looks like it will update, but just goes back to the same container. Anyone else have this problem? It occurs on multiple containers: mariadb, binhex-jackett, binhex-delugevpn, and others. I keep seeing this error in the diagnostics: "level=error msg="Not continuing with pull after error: context canceled"". I've recreated the docker.img twice thus far and still see this type of problem whenever there are new updates. Anyone have any clues? tower-diagnostics-20200316-2118.zip
  23. It took me quite a while to figure out, including creating my own dockers for both the GUI and CLI. I wanted a fully automated system for Radarr... Radarr's metadata for Kodi is horrible, so I wanted more. I created the CLI container to automatically scan my library for new content and download metadata for it on a routine basis (early morning when everyone is asleep). I also modified a headless Kodi docker and set it up to scan for updates every day shortly after the TMM CLI finished. All of it uses cron jobs. I let Radarr do all the deleting.
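
For the Kodi SQL server mentioned in post 3: a minimal sketch of the standard Kodi shared-database config, pointing each client at a MariaDB container. The IP address, port, and credentials are placeholders, not values from the post; adjust them to your own MariaDB setup.

```
# Sketch: write advancedsettings.xml so this Kodi client uses the shared MariaDB library
# (placeholder host/credentials; repeat on every Kodi client that should share the library)
cat > ~/.kodi/userdata/advancedsettings.xml <<'EOF'
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>   <!-- IP of the MariaDB container -->
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
  <musicdatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </musicdatabase>
</advancedsettings>
EOF
```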
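For the cache pool question in post 10: an Unraid cache pool is btrfs, and on a multi-device pool the "used" figure includes allocated data/metadata chunks, so it can sit well above what the files add up to. A quick way to see where the space actually went, assuming the standard /mnt/cache mount:

```
# Allocated vs. actually used space, per device and per chunk type
btrfs filesystem usage /mnt/cache

# Breakdown by data / metadata / system chunks
btrfs filesystem df /mnt/cache

# Compare against what the files themselves add up to
du -sh /mnt/cache/*
```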
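For the TRIM issue in post 11: a quick way to check whether discard/TRIM actually reaches an SSD through a given controller. The device name and mount point are placeholders (unassigned devices normally mount under /mnt/disks on Unraid).

```
# Non-zero DISC-GRAN / DISC-MAX values mean the device advertises discard support via this controller
lsblk --discard /dev/sdX

# Try a manual TRIM on the mounted filesystem; it errors out if the controller blocks discards
fstrim -v /mnt/disks/your_unassigned_ssd
```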
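For post 15, on changing the run time: a rough sketch of what the entrypoint-plus-rebuild step might look like, assuming the entrypoint simply installs a crontab and keeps the cron daemon in the foreground. The tmm command path is a placeholder (the real invocation lives in the coolasice1999/tmmcli repo), and the cron daemon name depends on the base image (crond on Alpine/BusyBox, cron on Debian).

```
#!/bin/sh
# entrypoint.sh (sketch): edit the cron expression below to change the nightly run time (12:30 AM here)
echo '30 0 * * * /path/to/tmm-cli-command >> /config/tmm-cli.log 2>&1' | crontab -

# Keep cron in the foreground so the container stays running
crond -f
```

After editing the entrypoint, rebuild and retag the image, e.g. `docker build -t my-tmm-cli-cronjob .`, and point your container template at the new tag.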
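For the setup in post 16: a sketch of running the GUI and CLI images against the same appdata and media paths so the GUI's configuration carries over. The container-side paths (/config, /media) and host paths are assumptions rather than values taken from the image documentation; adjust them to whatever the templates actually use.

```
# GUI version: use it once to configure scrapers/metadata, then stop it
docker run -d --name tmm-gui \
  -v /mnt/user/appdata/tmm:/config \
  -v /mnt/user/media:/media \
  coolasice1999/tmm

# CLI/cron version: shares the same appdata, so it picks up the GUI's settings
docker run -d --name tmm-cli \
  -v /mnt/user/appdata/tmm:/config \
  -v /mnt/user/media:/media \
  coolasice1999/tmm-cli-cronjob
```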
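For the nightly library update in post 17: one common way to trigger a scan from a cron job is Kodi's JSON-RPC API (VideoLibrary.Scan). This is a generic approach, not the exact script from that container; the host, port, and credentials are placeholders for the headless Kodi instance's web server settings.

```
#!/bin/sh
# kodi-library-scan.sh: ask the headless Kodi instance to rescan the video library via JSON-RPC
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"VideoLibrary.Scan","id":1}' \
  http://kodi:password@192.168.1.10:8080/jsonrpc

# Example crontab entry: run at 1:00 AM, shortly after the tmm CLI pass finishes
# 0 1 * * * /config/kodi-library-scan.sh
```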